
Porting Existing Account Data to PennyPipe


This guide helps end users move account data from another system into PennyPipe safely.

What "porting" means in PennyPipe

PennyPipe stores account history as ledger entries (credit and debit).
To port data, you can:

  • enter a few rows manually in the PennyPipe UI (best for small datasets), or

  • import many rows through the integration API batch endpoint (best for large datasets).

Before you start

Prepare this checklist first:

  • Confirm the target PennyPipe account exists and has the correct currency.

  • Decide your migration cutoff date.

  • Export your old data to CSV from the source system.

  • Normalize columns to this minimum shape:

    • date

    • type (credit or debit)

    • amount (positive decimal)

    • note (optional)

  • Remove duplicates and test with 5 to 10 rows first.
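The normalization step in the checklist could be sketched like this. It is a minimal sketch, assuming hypothetical source column names (txn_date, direction, value, memo); rename them to match your actual export.

```python
from decimal import Decimal, InvalidOperation

def normalize_row(raw):
    """Map one source CSV row (a dict) to the minimum PennyPipe shape.

    Assumes hypothetical source columns 'txn_date', 'direction', 'value'
    and 'memo' -- adjust to your legacy tool's export.
    """
    entry_type = raw["direction"].strip().lower()
    if entry_type not in ("credit", "debit"):
        raise ValueError(f"unexpected type: {raw['direction']!r}")
    try:
        amount = Decimal(raw["value"])
    except InvalidOperation:
        raise ValueError(f"bad amount: {raw['value']!r}")
    if amount <= 0:
        raise ValueError("amount must be a positive decimal")
    return {
        "date": raw["txn_date"],
        "type": entry_type,
        "amount": str(amount),
        "note": raw.get("memo", ""),
    }
```

Run it over your 5 to 10 test rows first; any row that raises tells you a column mapping still needs work.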

Option A: Manual entry (small volumes)

Use this if you only need to port a limited number of rows.

  1. Open the account in PennyPipe.

  2. Go to the account ledger page.

  3. Add entries one by one as credit or debit.

  4. Use notes and tags to preserve source references (for example, old transaction id).

  5. Verify running balance after each small batch.

Recommended for up to about 50 rows.

Option B: Bulk port using integration API (recommended)

Use this for larger migrations.

1) Register integration access

A PennyPipe account manager must register your OAuth client on the account.

  • POST /api/accounts/{accountId}/integration-clients

  • Body:

{
  "oauthClientId": "your-client-id"
}

2) Get access token

Request a client credentials token with:

  • webjala_pennypipe_api

  • pennypipe.integration.ledger.write
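A token request using those two values might look like the sketch below. The field names follow the standard OAuth 2.0 client credentials grant; whether webjala_pennypipe_api is passed as audience, resource, or another parameter depends on the identity provider, so confirm the exact names and the token endpoint URL with Webjala.

```python
import urllib.parse

def token_request_form(client_id, client_secret):
    """Form fields for an OAuth 2.0 client credentials grant (standard
    field names; audience/scope parameter names are assumptions)."""
    return {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "audience": "webjala_pennypipe_api",
        "scope": "pennypipe.integration.ledger.write",
    }

# URL-encode the form and POST it to your token endpoint (not listed here).
body = urllib.parse.urlencode(token_request_form("your-client-id", "your-secret"))
```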

2.5) Set per-entry referenceId for migration upserts

For API-fed migration rows, send entry.referenceId as the source transaction id (for example, StockSmart sales id).

  • referenceId is optional and at most 200 characters.

  • It is integration-only. Manual UI posts (manager/bookkeeper) must leave it empty; the API rejects user-posted entries that carry a referenceId.

  • When present, it is treated as unique within one account's ledger (same accountId), matched case-insensitively and trim-insensitively.

  • Reposting the same referenceId updates the existing ledger row in place (note, amount, type, tags, custom fields are overwritten) instead of creating a duplicate. This is the recommended way to handle "edits" or "corrections" coming from the source system.

  • Transfer entries (rows created by the in-app Transfer flow) are excluded from referenceId upserts — they remain immutable.
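Because matching is case-insensitive and trim-insensitive within an account, it is worth pre-checking your export for source ids that PennyPipe would treat as the same referenceId. A small sketch:

```python
from collections import defaultdict

def find_reference_collisions(reference_ids):
    """Group source ids that would match the same referenceId in PennyPipe
    (comparison ignores case and surrounding whitespace)."""
    groups = defaultdict(list)
    for rid in reference_ids:
        groups[rid.strip().lower()].append(rid)
    return {key: ids for key, ids in groups.items() if len(ids) > 1}
```

Any non-empty result means a later row would silently upsert over an earlier one; decide per group whether that is intended.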

referenceId vs lineId

These are different and should both be set during migrations:

Field         Scope                            Purpose
lineId        Per batch request                Lets the batch response map results back to your CSV rows. Not stored on the ledger row.
referenceId   Per ledger entry, account-wide   Persisted on the entry. Drives cross-batch idempotency / upsert. Visible in the UI.

Visibility in the PennyPipe UI

After import, open the account ledger page (/accounts/{accountId}/ledger) and click any imported row. The right-side Entry details panel shows the Reference ID field (only when set). Use this to spot-check that your migration ran with the expected source identifiers and to support audit/reconciliation conversations.

3) Convert CSV rows to batch payload

Send rows to:

  • POST /api/accounts/{accountId}/ledger/batch

Payload shape:

{
  "items": [
    {
      "lineId": "legacy-row-1001",
      "entry": {
        "entryType": "credit",
        "amount": 120.50,
        "referenceId": "stocksmart-sale-1001",
        "note": "Imported: INV-1001",
        "tagNames": [ "migration-2026-04", "legacy" ],
        "createMissingTags": true
      }
    },
    {
      "lineId": "legacy-row-1002",
      "entry": {
        "entryType": "debit",
        "amount": 20.00,
        "referenceId": "stocksmart-sale-1002",
        "note": "Imported: Fee",
        "tagNames": [ "migration-2026-04", "legacy" ],
        "createMissingTags": true
      }
    }
  ]
}
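The CSV-to-payload conversion could be sketched as below, building on rows in the normalized shape from the checklist. The 'source_id' key and the legacy-row- / stocksmart-sale- prefixes are illustrative; use whatever stable identifiers your source system provides.

```python
def to_batch_payload(rows, tag_names=("migration-2026-04", "legacy")):
    """Turn normalized rows into the /ledger/batch payload shape.

    Each row is assumed to carry 'source_id', 'type', 'amount' and 'note'
    keys (hypothetical names from the normalization step)."""
    items = []
    for row in rows:
        items.append({
            "lineId": f"legacy-row-{row['source_id']}",
            "entry": {
                "entryType": row["type"],
                "amount": float(row["amount"]),
                "referenceId": f"stocksmart-sale-{row['source_id']}",
                "note": f"Imported: {row['note']}",
                "tagNames": list(tag_names),
                "createMissingTags": True,
            },
        })
    return {"items": items}
```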

4) Use idempotent line ids

Set lineId to a stable source identifier (for example, source row id).
If you retry the same batch, PennyPipe can avoid double-posting the same data.
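A retry wrapper that resends the identical payload might look like this sketch; post_fn stands in for your actual HTTP call to /ledger/batch. The key point is that the payload, including every lineId, is reused unchanged on retry.

```python
import time

def post_batch_with_retry(post_fn, payload, attempts=3, delay=1.0):
    """Retry the same batch payload unchanged on transient failure.

    Because lineId values are stable, a retried batch should not
    double-post rows that already succeeded."""
    for attempt in range(1, attempts + 1):
        try:
            return post_fn(payload)
        except ConnectionError:
            if attempt == attempts:
                raise
            time.sleep(delay * attempt)  # simple linear backoff
```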

5) Validate after import

After each batch:

  • Compare source row count vs imported success count.

  • Compare source totals vs PennyPipe ledger summary.

  • Export the imported ledger to CSV and reconcile.

  • Open a few rows in the ledger UI and confirm the Reference ID field in the right-side details panel matches the source identifier you sent.
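The first two checks above can be automated with a small comparison like this sketch, which assumes both sides are lists of dicts in the normalized shape ('type' and 'amount' keys):

```python
from decimal import Decimal

def reconcile(source_rows, imported_rows):
    """Compare row counts and signed totals (credits positive, debits negative)."""
    def signed_total(rows):
        total = Decimal("0")
        for row in rows:
            amount = Decimal(str(row["amount"]))
            total += amount if row["type"] == "credit" else -amount
        return total
    source_total = signed_total(source_rows)
    imported_total = signed_total(imported_rows)
    return {
        "source_count": len(source_rows),
        "imported_count": len(imported_rows),
        "source_total": source_total,
        "imported_total": imported_total,
        "matches": (len(source_rows) == len(imported_rows)
                    and source_total == imported_total),
    }
```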

Suggested migration strategy

For the safest result:

  1. Post one "opening balance" entry at migration cutoff date.

  2. Import only transactions after cutoff date.

  3. Run a reconciliation report.

  4. Keep the source transaction id in referenceId for audit tracing, and use the same value as lineId so batch responses are easy to correlate with source rows. Storing it in note is no longer necessary: referenceId is persisted on the entry and visible in the ledger UI's details panel.
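The cutoff split in steps 1 and 2 might be sketched as below, using rows in the normalized shape with a date field already parsed; the cutoff value is a placeholder for your chosen date.

```python
from datetime import date

CUTOFF = date(2026, 4, 1)  # assumption: replace with your migration cutoff

def split_for_migration(rows):
    """Return (opening_balance_entry, rows_on_or_after_cutoff).

    Rows before the cutoff collapse into one signed opening balance;
    rows on or after the cutoff are imported individually."""
    before = [r for r in rows if r["date"] < CUTOFF]
    after = [r for r in rows if r["date"] >= CUTOFF]
    balance = sum(r["amount"] if r["type"] == "credit" else -r["amount"]
                  for r in before)
    entry_type = "credit" if balance >= 0 else "debit"
    opening = {
        "type": entry_type,
        "amount": abs(balance),
        "note": f"Opening balance as of {CUTOFF.isoformat()}",
    }
    return opening, after
```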

Common errors and fixes

  • 401/403 unauthorized: the token is missing required scopes, or the client is not registered on the account.

  • Validation error on entryType: use only credit or debit.

  • ReferenceId is only allowed for integration API feeds: a user-bearer token (manager/bookkeeper) tried to post with referenceId. Drop the field for UI/manual posts, or post via the integration OAuth client.

  • ReferenceId must be at most 200 characters: trim or hash long source ids before sending.

  • Unexpected updates to existing rows: most likely a referenceId collision — remember the match is case-insensitive within an account. Either send a unique value or accept the upsert (this is the intended behavior for source-of-truth re-syncs).

  • Duplicate import risk: always send stable lineId values and a stable referenceId. lineId protects against retries of the same batch; referenceId protects against re-running the migration end-to-end.

  • Balance mismatch: check sign mapping and whether source amounts include fees/tax splits.

Operational best practices

  • Import in chunks (for example 100 to 500 rows per call).

  • Keep a migration log (batch id, row count, totals, timestamp).

  • Test full flow in a non-production account first.

  • Freeze writes in old system during final cutover window.
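The chunking recommendation is a one-liner to implement; a sketch:

```python
def chunked(items, size=200):
    """Yield lists of at most `size` items (per the 100-500 rows guidance)."""
    for start in range(0, len(items), size):
        yield items[start:start + size]
```

Each yielded chunk becomes one /ledger/batch call; log the chunk index, row count, and totals as you go to build the migration log.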

Need automation?

If your team prefers automation, engineering can provide a small one-time migration script:

  • input: CSV export from your legacy tool

  • transform: map columns to PennyPipe schema

  • load: call /ledger/batch with retries and reconciliation output

This is the fastest and safest path for large historical migrations.