Entity Service Permutation Output

This tutorial demonstrates the workflow for private record linkage using the entity service. Two parties, Alice and Bob, each have a dataset of personally identifiable information (PII) about several entities. They want to learn how the entities in their respective datasets correspond, with the help of the entity service and an independent party, the Analyst.

The chosen output type is permutations, which consists of two permutations and one mask.

Who learns what?

After the linkage has been carried out, Alice and Bob will be able to retrieve a permutation - a reordering of their respective datasets such that the shared entities line up.

The Analyst - who creates the linkage project - learns the mask. The mask is a binary vector that indicates which rows in the permuted data sets are aligned. Note this reveals how many entities are shared.
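
As a purely illustrative sketch (the values below are made up and are not taken from the tutorial data), suppose Alice and Bob each hold three records and share two entities. The outputs could look like this:

# Hypothetical example values, for illustration only
alice_permutation = [0, 2, 1]  # Alice's row i is moved to position alice_permutation[i]
bob_permutation = [1, 0, 2]    # likewise for Bob's rows
mask = [1, 1, 0]               # rows 0 and 1 of the permuted datasets refer to shared entities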

Steps

These steps would usually be run by different organisations - but for illustration they are all carried out in this one notebook. The participants providing data are Alice and Bob, with the Analyst acting as the integration authority.

## Check Connection

If you’re connecting to a custom entity service, change the address here.
[1]:
import os
url = os.getenv("SERVER", "https://testing.es.data61.xyz")
print(f'Testing anonlink-entity-service hosted at {url}')
Testing anonlink-entity-service hosted at https://testing.es.data61.xyz
[2]:
!clkutil status --server "{url}"
{"project_count": 2109, "rate": 8216626, "status": "ok"}

## Data preparation

Following the clkhash tutorial we will use a dataset from the recordlinkage library. We will just write both datasets out to temporary CSV files.

[3]:
from tempfile import NamedTemporaryFile
from recordlinkage.datasets import load_febrl4
[4]:
dfA, dfB = load_febrl4()

a_csv = NamedTemporaryFile('w')
a_clks = NamedTemporaryFile('w', suffix='.json')
dfA.to_csv(a_csv)
a_csv.seek(0)

b_csv = NamedTemporaryFile('w')
b_clks = NamedTemporaryFile('w', suffix='.json')
dfB.to_csv(b_csv)
b_csv.seek(0)

dfA.head(3)

[4]:
| rec_id | given_name | surname | street_number | address_1 | address_2 | suburb | postcode | state | date_of_birth | soc_sec_id |
|---|---|---|---|---|---|---|---|---|---|---|
| rec-1070-org | michaela | neumann | 8 | stanley street | miami | winston hills | 4223 | nsw | 19151111 | 5304218 |
| rec-1016-org | courtney | painter | 12 | pinkerton circuit | bega flats | richlands | 4560 | vic | 19161214 | 4066625 |
| rec-4405-org | charles | green | 38 | salkauskas crescent | kela | dapto | 4566 | nsw | 19480930 | 4365168 |

The linkage schema must be agreed on by the two parties. A hashing schema instructs clkhash how to treat each column when generating CLKs. A detailed description of the hashing schema can be found in the API docs. We will ignore the columns ‘rec_id’ and ‘soc_sec_id’ for CLK generation.

[5]:
schema = NamedTemporaryFile('wt')
[6]:
%%writefile {schema.name}
{
  "version": 1,
  "clkConfig": {
    "l": 1024,
    "k": 30,
    "hash": {
      "type": "doubleHash"
    },
    "kdf": {
      "type": "HKDF",
      "hash": "SHA256",
        "info": "c2NoZW1hX2V4YW1wbGU=",
        "salt": "SCbL2zHNnmsckfzchsNkZY9XoHk96P/G5nUBrM7ybymlEFsMV6PAeDZCNp3rfNUPCtLDMOGQHG4pCQpfhiHCyA==",
        "keySize": 64
    }
  },
  "features": [
    {
      "identifier": "rec_id",
      "ignored": true
    },
    {
      "identifier": "given_name",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "weight": 1 }
    },
    {
      "identifier": "surname",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "weight": 1 }
    },
    {
      "identifier": "street_number",
      "format": { "type": "integer" },
      "hashing": { "ngram": 1, "positional": true, "weight": 0.5, "missingValue": {"sentinel": ""} }
    },
    {
      "identifier": "address_1",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "weight": 0.5 }
    },
    {
      "identifier": "address_2",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "weight": 0.5 }
    },
    {
      "identifier": "suburb",
      "format": { "type": "string", "encoding": "utf-8" },
      "hashing": { "ngram": 2, "weight": 0.5 }
    },
    {
      "identifier": "postcode",
      "format": { "type": "integer", "minimum": 100, "maximum": 9999 },
      "hashing": { "ngram": 1, "positional": true, "weight": 0.5 }
    },
    {
      "identifier": "state",
      "format": { "type": "string", "encoding": "utf-8", "maxLength": 3 },
      "hashing": { "ngram": 2, "weight": 1 }
    },
    {
      "identifier": "date_of_birth",
      "format": { "type": "integer" },
      "hashing": { "ngram": 1, "positional": true, "weight": 1, "missingValue": {"sentinel": ""} }
    },
    {
      "identifier": "soc_sec_id",
      "ignored": true
    }
  ]
}
Overwriting /tmp/tmpu8y0vxd4

## Create Linkage Project

The analyst carrying out the linkage starts by creating a linkage project of the desired output type with the Entity Service.

[7]:
creds = NamedTemporaryFile('wt')
print("Credentials will be saved in", creds.name)

!clkutil create-project --schema "{schema.name}" --output "{creds.name}" --type "permutations" --server "{url}"
creds.seek(0)

import json
with open(creds.name, 'r') as f:
    credentials = json.load(f)

project_id = credentials['project_id']
credentials
Credentials will be saved in /tmp/tmpngtrvblo
Project created
[7]:
{'project_id': '539a612e09bbac7fc5178f7798e15dfc310bc06878ff25fe',
 'result_token': '2a52a9729facd2fd4e547b8029697e3ab7a464c32f3ada7e',
 'update_tokens': ['47f701f76e06e2283f68dfddfb15da4b56bb05a43d6c5acb',
  '0b2228ff49ef9caeb29744f9ce97b39280873919a60a8765']}

Note: the analyst will need to pass on the project_id (the id of the linkage project) and one of the two update_tokens to each data provider.

## Hash and Upload

At the moment both data providers have raw personally identifiable information. We first have to generate CLKs from the raw entity information. We need:

- the clkhash library
- the linkage schema from above
- two secret passwords which are only known to Alice and Bob (here: horse and staple)

Please see clkhash documentation for further details on this.

[8]:
!clkutil hash "{a_csv.name}" horse staple "{schema.name}" "{a_clks.name}"
!clkutil hash "{b_csv.name}" horse staple "{schema.name}" "{b_clks.name}"
generating CLKs: 100%|█| 5.00k/5.00k [00:01<00:00, 3.31kclk/s, mean=765, std=37.1]
CLK data written to /tmp/tmpy3s8f407.json
generating CLKs: 100%|█| 5.00k/5.00k [00:01<00:00, 3.53kclk/s, mean=756, std=43.3]
CLK data written to /tmp/tmp0fdoothg.json

Now the two clients can upload their data providing the appropriate upload tokens and the project_id. As with all commands in clkhash we can output help:

[9]:
!clkutil upload --help
Usage: clkutil upload [OPTIONS] CLK_JSON

  Upload CLK data to entity matching server.

  Given a json file containing hashed clk data as CLK_JSON, upload to the
  entity resolution service.

  Use "-" to read from stdin.

Options:
  --project TEXT         Project identifier
  --apikey TEXT          Authentication API key for the server.
  --server TEXT          Server address including protocol
  -o, --output FILENAME
  -v, --verbose          Script is more talkative
  --help                 Show this message and exit.

Alice uploads her data

[10]:
with NamedTemporaryFile('wt') as f:
    !clkutil upload \
        --project="{project_id}" \
        --apikey="{credentials['update_tokens'][0]}" \
        --server "{url}" \
        --output "{f.name}" \
        "{a_clks.name}"
    res = json.load(open(f.name))
    alice_receipt_token = res['receipt_token']

Every upload gets a receipt token. This token is required to access the results.

Bob uploads his data
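
The following cell is a sketch of Bob's upload (the corresponding cell is not shown above); it mirrors Alice's upload, using the second update token and Bob's CLK file, and keeps Bob's receipt token for fetching his results later.

with NamedTemporaryFile('wt') as f:
    !clkutil upload \
        --project="{project_id}" \
        --apikey="{credentials['update_tokens'][1]}" \
        --server "{url}" \
        --output "{f.name}" \
        "{b_clks.name}"
    res = json.load(open(f.name))
    bob_receipt_token = res['receipt_token']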

Now that the project has been created and the CLK data has been uploaded, we can carry out the privacy preserving record linkage by creating a run on the project with a chosen similarity threshold. Try a few different threshold values:
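
Below is a sketch of creating a run with clkhash.rest_client. It assumes the installed clkhash version provides rest_client.run_create(server, project, apikey, threshold, name) and that the response contains a run_id; check the signature against your clkhash version (alternatively, a run can be created by POSTing to the project's runs endpoint with the result token).

import clkhash.rest_client

threshold = 0.85  # similarity threshold above which two CLKs are considered a match
run = clkhash.rest_client.run_create(url, project_id, credentials['result_token'],
                                     threshold, 'tutorial run')
run_id = run['run_id']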

After some delay (depending on the size of the datasets) we can fetch the mask. This can be done with clkutil:

!clkutil results --server "{url}" \
    --project="{credentials['project_id']}" \
    --apikey="{credentials['result_token']}" --output results.txt

However for this tutorial we are going to use the Python requests library:

[14]:
import requests
import clkhash.rest_client

from IPython.display import clear_output
[15]:
for update in clkhash.rest_client.watch_run_status(url, project_id, run_id, credentials['result_token'], timeout=300):
    clear_output(wait=True)
    print(clkhash.rest_client.format_run_status(update))
State: completed
Stage (3/3): compute output
[17]:
results = requests.get('{}/api/v1/projects/{}/runs/{}/result'.format(url, project_id, run_id), headers={'Authorization': credentials['result_token']}).json()
[18]:
mask = results['mask']

This mask is a boolean array that specifies where rows of permuted data line up.

[19]:
print(mask[:10])
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

The number of 1s in the mask will tell us how many matches were found.

[20]:
sum([1 for m in mask if m == 1])
[20]:
4858

We also use requests to fetch the permutations for each data provider:

[21]:
alice_res = requests.get('{}/api/v1/projects/{}/runs/{}/result'.format(url, project_id, run_id), headers={'Authorization': alice_receipt_token}).json()
bob_res = requests.get('{}/api/v1/projects/{}/runs/{}/result'.format(url, project_id, run_id), headers={'Authorization': bob_receipt_token}).json()

Now Alice and Bob both have a new permutation - a new ordering for their data.

[22]:
alice_permutation = alice_res['permutation']
alice_permutation[:10]
[22]:
[4659, 4076, 1898, 868, 3271, 2486, 1078, 3774, 2656, 4324]

This permutation says the first row of Alice’s data should be moved to position 4659.

[23]:
bob_permutation = bob_res['permutation']
bob_permutation[:10]
[23]:
[3074, 1996, 4523, 500, 3384, 1115, 746, 1165, 2999, 2204]
[24]:
def reorder(items, order):
    """
    Reorder items by moving the item at position i to position order[i].
    """
    neworder = items.copy()
    for item, newpos in zip(items, order):
        # place each original item at its new position
        neworder[newpos] = item

    return neworder
[25]:
with open(a_csv.name, 'r') as f:
    alice_raw = f.readlines()[1:]
    alice_reordered = reorder(alice_raw, alice_permutation)

with open(b_csv.name, 'r') as f:
    bob_raw = f.readlines()[1:]
    bob_reordered = reorder(bob_raw, bob_permutation)

Now that the two data sets have been permuted, the mask reveals where the rows line up, and where they don’t.

[26]:
alice_reordered[:10]
[26]:
['rec-4746-org,gabrielle,fargahry-tolba,10,northbourne avenue,pia place,st georges basin,2011,vic,19640424,7326839\n',
 'rec-438-org,alison,hearn,4,macdonnell street,cabrini medical centre,adelaide,2720,vic,19191230,2937695\n',
 'rec-3902-org,,oreilly,,paul coe crescent,wylarah,tuart hill,3219,vic,19500925,4201497\n',
 'rec-920-org,benjamin,clarke,122,archibald street,locn 1487,nickol,2535,nsw,19010518,1978760\n',
 'rec-2152-org,emiily,fitzpatrick,,aland place,keralland,rowville,2219,vic,19270130,1148897\n',
 'rec-3434-org,alex,clarke,12,fiveash street,emerald garden,homebush,2321,nsw,19840627,7280280\n',
 'rec-4197-org,talan,stubbs,21,augustus way,ashell,croydon north,3032,wa,19221022,7550622\n',
 'rec-2875-org,luke,white,31,outtrim avenue,glenora farm,flinders bay,2227,sa,19151010,6925269\n',
 'rec-2559-org,emiily,binns,24,howell place,sec 142 hd rounsevell,ryde,2627,wa,19941108,8919080\n',
 'rec-2679-org,thomas,brain,108,brewster place,geelong grove,eight mile plains,2114,qld,19851127,8873329\n']
[27]:
bob_reordered[:10]
[27]:
['rec-4746-dup-0,gabrielle,fargahry-tolba,11,northbourne avenue,pia place,st georges basin,2011,vic,19640424,7326839\n',
 'rec-438-dup-0,heatn,alison,4,macdonnell street,cabrini medicalb centre,adelaide,2270,vic,19191230,2937695\n',
 'rec-3902-dup-0,,oreilly,,paul coe cerscent,wylrah,tuart hill,3219,vic,19500925,4201497\n',
 'rec-920-dup-0,scott,clarke,122,archibald street,locn 1487,nickol,2553,nsw,19010518,1978760\n',
 'rec-2152-dup-0,megna,fitzpatrick,,aland place,keralalnd,rowville,2219,vic,19270130,1148897\n',
 'rec-3434-dup-0,alex,clarke,12,,emeral dgarden,homebush,2321,nsw,19840627,7280280\n',
 'rec-4197-dup-0,talan,stubbs,21,binns street,ashell,croydon north,3032,wa,19221022,7550622\n',
 'rec-2875-dup-0,luke,white,31,outtrim aqenue,glenora farm,flinedrs bay,2227,sa,19151010,6925269\n',
 'rec-2559-dup-0,binns,emiilzy,24,howell place,sec 142 hd rounsevell,ryde,2627,wa,19941108,8919080\n',
 'rec-2679-dup-0,dixon,thomas,108,brewster place,geelong grove,eight mile plains,2114,qld,19851127,8873329\n']

Accuracy

To compute how well the matching went we will use the record identifier (the first column) as our reference.

For example, rec-1396-org is an original record which has its match in rec-1396-dup-0. To satisfy ourselves we can preview the first few supposed matches:

[28]:
for i, m in enumerate(mask[:10]):
    if m:
        entity_a = alice_reordered[i].split(',')
        entity_b = bob_reordered[i].split(',')
        name_a = ' '.join(entity_a[1:3]).title()
        name_b = ' '.join(entity_b[1:3]).title()

        print("{} ({})".format(name_a, entity_a[0]), '=?', "{} ({})".format(name_b, entity_b[0]))
Gabrielle Fargahry-Tolba (rec-4746-org) =? Gabrielle Fargahry-Tolba (rec-4746-dup-0)
Alison Hearn (rec-438-org) =? Heatn Alison (rec-438-dup-0)
 Oreilly (rec-3902-org) =?  Oreilly (rec-3902-dup-0)
Benjamin Clarke (rec-920-org) =? Scott Clarke (rec-920-dup-0)
Emiily Fitzpatrick (rec-2152-org) =? Megna Fitzpatrick (rec-2152-dup-0)
Alex Clarke (rec-3434-org) =? Alex Clarke (rec-3434-dup-0)
Talan Stubbs (rec-4197-org) =? Talan Stubbs (rec-4197-dup-0)
Luke White (rec-2875-org) =? Luke White (rec-2875-dup-0)
Emiily Binns (rec-2559-org) =? Binns Emiilzy (rec-2559-dup-0)
Thomas Brain (rec-2679-org) =? Dixon Thomas (rec-2679-dup-0)

Metrics

If you know the ground truth — the correct mapping between the two datasets — you can compute performance metrics of the linkage.

Precision: the percentage of found matches that are actual matches (tp/(tp+fp)).

Recall: the percentage of actual matches that we found (tp/(tp+fn)).

[29]:
tp = 0
fp = 0

for i, m in enumerate(mask):
    if m:
        entity_a = alice_reordered[i].split(',')
        entity_b = bob_reordered[i].split(',')
        if entity_a[0].split('-')[1] == entity_b[0].split('-')[1]:
            tp += 1
        else:
            fp += 1
            #print('False positive:',' '.join(entity_a[1:3]).title(), '?', ' '.join(entity_b[1:3]).title(), entity_a[-1] == entity_b[-1])

print("Found {} correct matches out of 5000. Incorrectly linked {} matches.".format(tp, fp))
precision = tp/(tp+fp)
recall = tp/5000

print("Precision: {:.1f}%".format(100*precision))
print("Recall: {:.1f}%".format(100*recall))
Found 4858 correct matches out of 5000. Incorrectly linked 0 matches.
Precision: 100.0%
Recall: 97.2%