Entity Service Permutation Output

This tutorial demonstrates the workflow for private record linkage using the entity service. Two parties, Alice and Bob, each hold a dataset of personally identifiable information (PII) about several entities. They want to learn which records in their respective datasets refer to the same entities, with the help of the entity service and an independent party, the Analyst.

The chosen output type is permutations, which consists of two permutations and one mask.

Who learns what?

After the linkage has been carried out, Alice and Bob will each be able to retrieve a permutation - a reordering of their respective datasets such that shared entities line up.

The Analyst - who creates the linkage project - learns the mask. The mask is a binary vector that indicates which rows in the permuted data sets are aligned. Note this reveals how many entities are shared.
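
To make this concrete, here is a toy sketch (hypothetical data, not produced by this notebook) of how the two permuted datasets and the mask fit together: matched entities end up at the same row positions, and the mask marks those positions.

# Hypothetical example: rows after each party has applied their permutation.
alice_rows = ["alice record X", "alice record Y", "alice record Z"]
bob_rows   = ["bob record X",   "bob record Y",   "bob record Q"]
mask       = [1, 1, 0]  # 1 means the rows at this position refer to the same entity

for a, b, m in zip(alice_rows, bob_rows, mask):
    print(a, "<->", b if m else "(no corresponding entity)")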

Steps

These steps are usually run by different companies, but for illustration they are all carried out in this one notebook. The participants providing data are Alice and Bob, with the Analyst acting as the integration authority.

Check Connection

If you’re connecting to a custom entity service, change the address here. Or set the environment variable SERVER before launching the Jupyter notebook.
[1]:
import os
url = os.getenv("SERVER", "https://anonlink.easd.data61.xyz")
print(f'Testing anonlink-entity-service hosted at {url}')
Testing anonlink-entity-service hosted at https://anonlink.easd.data61.xyz
[2]:
!anonlink status --server "{url}"
{"project_count": 846, "rate": 593838, "status": "ok"}

Data preparation

Following the anonlink-client command line tutorial we will use a dataset from the recordlinkage library. We will just write both datasets out to temporary CSV files.

[3]:
from tempfile import NamedTemporaryFile
from recordlinkage.datasets import load_febrl4
[4]:
dfA, dfB = load_febrl4()

a_csv = NamedTemporaryFile('w')
a_clks = NamedTemporaryFile('w', suffix='.json')
dfA.to_csv(a_csv)
a_csv.seek(0)

b_csv = NamedTemporaryFile('w')
b_clks = NamedTemporaryFile('w', suffix='.json')
dfB.to_csv(b_csv)
b_csv.seek(0)

dfA.head(3)

[4]:
rec_id        given_name  surname  street_number  address_1            address_2   suburb         postcode  state  date_of_birth  soc_sec_id
rec-1070-org  michaela    neumann  8              stanley street       miami       winston hills  4223      nsw    19151111       5304218
rec-1016-org  courtney    painter  12             pinkerton circuit    bega flats  richlands      4560      vic    19161214       4066625
rec-4405-org  charles     green    38             salkauskas crescent  kela        dapto          4566      nsw    19480930       4365168

Schema Preparation

The linkage schema must be agreed on by the two parties. A hashing schema instructs clkhash how to treat each column for generating CLKs. A detailed description of the hashing schema can be found in the clkhash schema docs. We will ignore the columns ‘rec_id’ and ‘soc_sec_id’ for CLK generation.

[5]:
schema = NamedTemporaryFile('wt')
[6]:
%%writefile {schema.name}
{
  "version": 3,
  "clkConfig": {
    "l": 1024,
    "xor_folds": 0,
    "kdf": {
      "type": "HKDF",
      "hash": "SHA256",
      "info": "c2NoZW1hX2V4YW1wbGU=",
      "salt": "SCbL2zHNnmsckfzchsNkZY9XoHk96P/G5nUBrM7ybymlEFsMV6PAeDZCNp3rfNUPCtLDMOGQHG4pCQpfhiHCyA==",
      "keySize": 64
    }
  },
  "features": [
    {
      "identifier": "rec_id",
      "ignored": true
    },
    {
      "identifier": "given_name",
      "format": {
        "type": "string",
        "encoding": "utf-8"
      },
      "hashing": {
        "strategy": {
          "bitsPerToken": 30
        },
        "hash": {
          "type": "doubleHash"
        },
        "comparison": {
          "type": "ngram",
          "n": 2,
          "positional": false
        }
      }
    },
    {
      "identifier": "surname",
      "format": {
        "type": "string",
        "encoding": "utf-8"
      },
      "hashing": {
        "strategy": {
          "bitsPerToken": 30
        },
        "hash": {
          "type": "doubleHash"
        },
        "comparison": {
          "type": "ngram",
          "n": 2,
          "positional": false
        }
      }
    },
    {
      "identifier": "street_number",
      "format": {
        "type": "integer"
      },
      "hashing": {
        "missingValue": {
          "sentinel": ""
        },
        "strategy": {
          "bitsPerToken": 15
        },
        "hash": {
          "type": "doubleHash"
        },
        "comparison": {
          "type": "ngram",
          "n": 1,
          "positional": true
        }
      }
    },
    {
      "identifier": "address_1",
      "format": {
        "type": "string",
        "encoding": "utf-8"
      },
      "hashing": {
        "strategy": {
          "bitsPerToken": 15
        },
        "hash": {
          "type": "doubleHash"
        },
        "comparison": {
          "type": "ngram",
          "n": 2,
          "positional": false
        }
      }
    },
    {
      "identifier": "address_2",
      "format": {
        "type": "string",
        "encoding": "utf-8"
      },
      "hashing": {
        "strategy": {
          "bitsPerToken": 15
        },
        "hash": {
          "type": "doubleHash"
        },
        "comparison": {
          "type": "ngram",
          "n": 2,
          "positional": false
        }
      }
    },
    {
      "identifier": "suburb",
      "format": {
        "type": "string",
        "encoding": "utf-8"
      },
      "hashing": {
        "strategy": {
          "bitsPerToken": 15
        },
        "hash": {
          "type": "doubleHash"
        },
        "comparison": {
          "type": "ngram",
          "n": 2,
          "positional": false
        }
      }
    },
    {
      "identifier": "postcode",
      "format": {
        "type": "integer",
        "minimum": 100,
        "maximum": 9999
      },
      "hashing": {
        "strategy": {
          "bitsPerToken": 15
        },
        "hash": {
          "type": "doubleHash"
        },
        "comparison": {
          "type": "ngram",
          "n": 1,
          "positional": true
        }
      }
    },
    {
      "identifier": "state",
      "format": {
        "type": "string",
        "encoding": "utf-8",
        "maxLength": 3
      },
      "hashing": {
        "strategy": {
          "bitsPerToken": 30
        },
        "hash": {
          "type": "doubleHash"
        },
        "comparison": {
          "type": "ngram",
          "n": 2,
          "positional": false
        }
      }
    },
    {
      "identifier": "date_of_birth",
      "format": {
        "type": "integer"
      },
      "hashing": {
        "missingValue": {
          "sentinel": ""
        },
        "strategy": {
          "bitsPerToken": 30
        },
        "hash": {
          "type": "doubleHash"
        },
        "comparison": {
          "type": "ngram",
          "n": 1,
          "positional": true
        }
      }
    },
    {
      "identifier": "soc_sec_id",
      "ignored": true
    }
  ]
}
Overwriting /tmp/tmplm0udc70

Create Linkage Project

The analyst carrying out the linkage starts by creating a linkage project of the desired output type with the Entity Service.

[7]:
creds = NamedTemporaryFile('wt')
print("Credentials will be saved in", creds.name)

!anonlink create-project \
    --schema "{schema.name}" \
    --output "{creds.name}" \
    --type "permutations" \
    --server "{url}"

creds.seek(0)

import json
with open(creds.name, 'r') as f:
    credentials = json.load(f)

project_id = credentials['project_id']
credentials
Credentials will be saved in /tmp/tmp_d0pcu7x
Project created
[7]:
{'project_id': 'd9ffdb48df4cc0acb4f0ab29f56be0873dff50f95ba15ada',
 'result_token': '030796ecdf1fdf600f6751ca2bd2aee98c360aafcea56934',
 'update_tokens': ['4b138f6464315179e08e3d08e403b1da0be27ab3e478ece4',
  '61c9e2ddd1a053c99af4f5c09e224a43723a4dfd9dceafd7']}

Note: the analyst will need to pass on the project_id (the id of the linkage project) and one of the two update_tokens to each data provider.
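
For example, the analyst could split the credentials above into the pieces each data provider needs (a minimal sketch using the credentials dictionary from the previous cell; the variable names are illustrative only):

# The analyst keeps the result_token; each data provider only receives
# the project_id and their own update token.
alice_credentials = {'project_id': credentials['project_id'],
                     'update_token': credentials['update_tokens'][0]}
bob_credentials = {'project_id': credentials['project_id'],
                   'update_token': credentials['update_tokens'][1]}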

Hash and Upload

At the moment both data providers hold raw personally identifiable information. We first have to generate CLKs from the raw entity information. We need:

- the command line encoding tool provided by anonlink-client,
- the linkage schema from above, and
- a secret which is only known to Alice and Bob (here: my_secret).

Full command line documentation can be found here; see the clkhash documentation for further details on the encoding itself.

[8]:
!anonlink hash "{a_csv.name}" my_secret "{schema.name}" "{a_clks.name}"
!anonlink hash "{b_csv.name}" my_secret "{schema.name}" "{b_clks.name}"
CLK data written to /tmp/tmpgso1v_7b.json
CLK data written to /tmp/tmpamtsmico.json

Now the two clients can upload their data, providing the appropriate upload tokens and the project_id. As with all commands in anonlink, we can output the help:

[9]:
!anonlink upload --help
Usage: anonlink upload [OPTIONS] CLK_JSON

  Upload CLK data to entity matching server.

  Given a json file containing hashed clk data as CLK_JSON, upload to the
  entity resolution service.

  Use "-" to read from stdin.

Options:
  --project TEXT                  Project identifier
  --apikey TEXT                   Authentication API key for the server.
  -o, --output FILENAME
  --blocks FILENAME               Generated blocks JSON file
  --server TEXT                   Server address including protocol. Default
                                  https://anonlink.easd.data61.xyz.

  --retry-multiplier INTEGER      <milliseconds> If receives a 503 from
                                  server, minimum waiting time before
                                  retrying. Default 100.

  --retry-exponential-max INTEGER
                                  <milliseconds> If receives a 503 from
                                  server, maximum time interval between
                                  retries. Default 10000.

  --retry-max-time INTEGER        <milliseconds> If receives a 503 from
                                  server, retry only within this period.
                                  Default 20000.

  -v, --verbose                   Script is more talkative
  --help                          Show this message and exit.

Alice uploads her data

[10]:
with NamedTemporaryFile('wt') as f:
    !anonlink upload \
        --project="{project_id}" \
        --apikey="{credentials['update_tokens'][0]}" \
        --server "{url}" \
        --output "{f.name}" \
        "{a_clks.name}"
    res = json.load(open(f.name))
    alice_receipt_token = res['receipt_token']

Every upload gets a receipt token. This token is required to access the results.

Bob uploads his data

[11]:
with NamedTemporaryFile('wt') as f:
    !anonlink upload \
        --project="{project_id}" \
        --apikey="{credentials['update_tokens'][1]}" \
        --server "{url}" \
        --output "{f.name}" \
        "{b_clks.name}"

    bob_receipt_token = json.load(open(f.name))['receipt_token']

Create a run

Now that the project has been created and the CLK data has been uploaded, we can carry out some privacy preserving record linkage. Try with a few different threshold values:

[12]:
with NamedTemporaryFile('wt') as f:
    !anonlink create \
        --project="{project_id}" \
        --apikey="{credentials['result_token']}" \
        --server "{url}" \
        --threshold 0.85 \
        --output "{f.name}"

    run_id = json.load(open(f.name))['run_id']

Results

Now, after some delay (depending on the size of the data), we can fetch the mask. Results can be fetched with the anonlink command line tool:

!anonlink results --server "{url}" \
    --project="{credentials['project_id']}" \
    --apikey="{credentials['result_token']}" --output results.txt

However, for this tutorial we are going to wait for the run to complete using anonlinkclient.rest_client, then pull the raw results using the requests library:

[13]:
import requests
from anonlinkclient.rest_client import RestClient
from anonlinkclient.rest_client import format_run_status

from IPython.display import clear_output
[14]:
rest_client = RestClient(url)
for update in rest_client.watch_run_status(project_id, run_id, credentials['result_token'], timeout=300):
    clear_output(wait=True)
    print(format_run_status(update))
State: completed
Stage (3/3): compute output
[15]:
results = requests.get('{}/api/v1/projects/{}/runs/{}/result'.format(url, project_id, run_id), headers={'Authorization': credentials['result_token']}).json()
[16]:
mask = results['mask']

This mask is a binary list that specifies which rows of the permuted datasets line up.

[17]:
print(mask[:10])
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

The number of 1s in the mask will tell us how many matches were found.

[18]:
sum([1 for m in mask if m == 1])
[18]:
4851

We also use requests to fetch the permutations for each data provider:

[19]:
alice_res = requests.get('{}/api/v1/projects/{}/runs/{}/result'.format(url, project_id, run_id), headers={'Authorization': alice_receipt_token}).json()
bob_res = requests.get('{}/api/v1/projects/{}/runs/{}/result'.format(url, project_id, run_id), headers={'Authorization': bob_receipt_token}).json()

Now Alice and Bob both have a new permutation - a new ordering for their data.

[20]:
alice_permutation = alice_res['permutation']
alice_permutation[:10]
[20]:
[1525, 1775, 4695, 1669, 1816, 2778, 1025, 2069, 4358, 4217]

This permutation says the first row of Alice’s data should be moved to position 1525.

[21]:
bob_permutation = bob_res['permutation']
bob_permutation[:10]
[21]:
[2882, 3332, 3654, 300, 1949, 765, 4356, 1049, 2325, 4964]
[22]:
def reorder(items, order):
    """
    Assume order is a list of new index
    """
    neworder = items.copy()
    for item, newpos in zip(items, order):
        neworder[newpos] = item

    return neworder
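
As a quick sanity check of reorder (a toy example, not a cell from the original run), the value at position i of the permutation gives the new position of item i:

print(reorder(['a', 'b', 'c'], [2, 0, 1]))
# ['b', 'c', 'a']  -- 'a' moves to position 2, 'b' to position 0, 'c' to position 1
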
[23]:
with open(a_csv.name, 'r') as f:
    alice_raw = f.readlines()[1:]
    alice_reordered = reorder(alice_raw, alice_permutation)

with open(b_csv.name, 'r') as f:
    bob_raw = f.readlines()[1:]
    bob_reordered = reorder(bob_raw, bob_permutation)

Now that the two data sets have been permuted, the mask reveals where the rows line up, and where they don’t.
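
If you only care about the rows that matched, the mask can be used directly to filter the two permuted datasets (a small sketch, not part of the original notebook):

# Keep only the positions where the mask indicates a match.
matched_pairs = [(a, b) for a, b, m in zip(alice_reordered, bob_reordered, mask) if m]
print(len(matched_pairs))  # equals the number of 1s in the mask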

[24]:
alice_reordered[:10]
[24]:
['rec-1977-org,aidan,morrison,2,broadsmith street,,cloverdale,2787,act,19140202,8821751\n',
 'rec-51-org,ella,blunden,37,freda gibson circuit,croyde,paddington,2770,sa,19401209,3593307\n',
 'rec-1151-org,courtney,gilbertson,35,maccallum circuit,barley hill,granville,2646,qld,19910105,5257049\n',
 'rec-2037-org,freya,mason,10,barnett close,dianella masonic village (cnr cornwell s,clifton springs,7301,wa,19241109,1571902\n',
 'rec-903-org,brianna,barisic,1502,haddon street,parish talowahl,launching place,3220,vic,19750703,5367822\n',
 'rec-2883-org,jackson,clarke,1,cargelligo street,summerset,bellevue hill,3835,qld,19571105,7943648\n',
 'rec-2856-org,chloe,setlhong,4,nunki place,yacklin,cronulla,6164,act,19950628,2829638\n',
 'rec-4831-org,caleb,thorpe,4,river street,,granville,2641,nsw,19590118,7916934\n',
 'rec-317-org,amber,nicolakopoulos,38,atkinson street,mount patrick,edgewater,2905,sa,19910707,9220881\n',
 'rec-2685-org,joel,lodge,200,steinwedel street,kmart p plaza,toowoomba,4012,wa,19710830,2655513\n']
[25]:
bob_reordered[:10]
[25]:
['rec-1977-dup-0,aidan,morrison,2,broadsmit hstreet,,clovedale,2787,act,19140202,8821751\n',
 'rec-51-dup-0,adam,,37,freda gibson circuit,cryode,paddington,2770,sa,19401209,3593307\n',
 'rec-1151-dup-0,courtney,dabinet,240,feathertopstreet,barley hill,tardun,2646,qld,19910105,5257049\n',
 'rec-2037-dup-0,beth,maso,10,barnett close,dianella masonic vlilage (cnr cornwell s,clifton springs,7320,wa,19241109,1571902\n',
 'rec-903-dup-0,barisic,brianna,1502,haddon street,parish talowahl,launching place,3220,vic,19750703,5367822\n',
 'rec-2883-dup-0,jackon,clareke,1,cargelligo street,summerset,bellevueh ill,3835,qdl,19571105,7943648\n',
 'rec-2856-dup-0,chloe,setlhong,4,nunki place,yacklin,cronulla,6614,act,19950628,2829638\n',
 'rec-4831-dup-0,cleb,thorpe,4,river street,,granville,2641,nsw,19590118,7916134\n',
 'rec-317-dup-0,amber,nicolakopoulos,38,atkinson street,mount patrick,edgewter,2905,sa,19910707,9220881\n',
 'rec-2685-dup-0,joe,lodgw,200,steinwedel street,kmart p plaza,toowoomba,4016,wa,19710830,2655513\n']

Accuracy

To compute how well the matching went, we will use the record identifier in the first column as our reference.

For example, rec-1396-org is the original record which has a match in rec-1396-dup-0. To satisfy ourselves, we can preview the first few supposed matches:

[26]:
for i, m in enumerate(mask[:10]):
    if m:
        entity_a = alice_reordered[i].split(',')
        entity_b = bob_reordered[i].split(',')
        name_a = ' '.join(entity_a[1:3]).title()
        name_b = ' '.join(entity_b[1:3]).title()

        print("{} ({})".format(name_a, entity_a[0]), '=?', "{} ({})".format(name_b, entity_b[0]))
Aidan Morrison (rec-1977-org) =? Aidan Morrison (rec-1977-dup-0)
Ella Blunden (rec-51-org) =? Adam  (rec-51-dup-0)
Courtney Gilbertson (rec-1151-org) =? Courtney Dabinet (rec-1151-dup-0)
Freya Mason (rec-2037-org) =? Beth Maso (rec-2037-dup-0)
Brianna Barisic (rec-903-org) =? Barisic Brianna (rec-903-dup-0)
Jackson Clarke (rec-2883-org) =? Jackon Clareke (rec-2883-dup-0)
Chloe Setlhong (rec-2856-org) =? Chloe Setlhong (rec-2856-dup-0)
Caleb Thorpe (rec-4831-org) =? Cleb Thorpe (rec-4831-dup-0)
Amber Nicolakopoulos (rec-317-org) =? Amber Nicolakopoulos (rec-317-dup-0)
Joel Lodge (rec-2685-org) =? Joe Lodgw (rec-2685-dup-0)

Metrics

If you know the ground truth — the correct mapping between the two datasets — you can compute performance metrics of the linkage.

Precision: The percentage of actual matches out of all found matches. (tp/(tp+fp))

Recall: How many of the actual matches have we found? (tp/(tp+fn))

[27]:
tp = 0
fp = 0

for i, m in enumerate(mask):
    if m:
        entity_a = alice_reordered[i].split(',')
        entity_b = bob_reordered[i].split(',')
        if entity_a[0].split('-')[1] == entity_b[0].split('-')[1]:
            tp += 1
        else:
            fp += 1
            #print('False positive:',' '.join(entity_a[1:3]).title(), '?', ' '.join(entity_b[1:3]).title(), entity_a[-1] == entity_b[-1])

print("Found {} correct matches out of 5000. Incorrectly linked {} matches.".format(tp, fp))
precision = tp/(tp+fp)
recall = tp/5000

print("Precision: {:.1f}%".format(100*precision))
print("Recall: {:.1f}%".format(100*recall))
Found 4851 correct matches out of 5000. Incorrectly linked 0 matches.
Precision: 100.0%
Recall: 97.0%
[28]:
# Deleting the project
!anonlink delete-project \
        --project="{credentials['project_id']}" \
        --apikey="{credentials['result_token']}" \
        --server="{url}"
Project deleted