Entity Service: Multiparty linkage demo

This notebook demonstrates the multiparty linkage capability of the Entity Service.

We show how five parties may upload their hashed data to the Entity Service to obtain a multiparty linkage result. This result identifies each entity across all datasets in which they are included.

[1]:
import csv
import os
import pandas as pd

import requests

Each party has a dataset of the following form:

[2]:
pd.read_csv('data/dataset-1.csv', index_col='id').head()
[2]:
givenname surname dob gender city income phone number
id
0 tara hilton 27-08-1941 male canberra 84052.973 08 2210 0298
3 saJi vernre 22-12-2972 mals perth 50104.118 02 1090 1906
7 sliver paciorek NaN mals sydney 31750.893 NaN
9 ruby george 09-05-1939 male sydney 135099.875 07 4698 6255
10 eyrinm campbell 29-1q-1983 male perth NaN 08 299y 1535

Comparing the beginning of the first dataset with the second, we can see that the data quality is poor: there are many spelling mistakes and missing values. Let's see how well the Entity Service links these entities.

[3]:
pd.read_csv('data/dataset-2.csv', index_col='id').head()
[3]:
givenname surname dob gender city income phone number
id
3 zali verner 22-12-1972 male perth 50104.118 02 1090 1906
4 samuel tremellen 21-12-1923 male melbourne 159316.091 03 3605 9336
5 amy lodge 16-01-1958 male canberra 70170.456 07 8286 9372
7 oIji pacioerk 10-02-1959 mal3 sydney 31750.893 04 4220 5949
10 erin kampgell 29-12-1983 make perth 331476.598 08 2996 1445

Check the status of the Entity Service

Ensure that it is running and that we have the correct version. Multiparty support was introduced in version 1.11.0.

[4]:
SERVER = os.getenv("SERVER", "https://anonlink.easd.data61.xyz")
PREFIX = f"{SERVER}/api/v1"
print(requests.get(f"{PREFIX}/status").json())
print(requests.get(f"{PREFIX}/version").json())
{'project_count': 839, 'rate': 550410, 'status': 'ok'}
{'anonlink': '0.12.5', 'entityservice': 'v1.13.0-beta2', 'python': '3.8.2'}
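
If we want to fail fast, we can check the reported version programmatically. This is a minimal sketch of our own; it assumes the version string format shown above (e.g. 'v1.13.0-beta2'):

# Parse the reported entityservice version and check it against
# v1.11.0, the first release with multiparty support.
version = requests.get(f"{PREFIX}/version").json()["entityservice"]
major, minor = (int(part) for part in version.lstrip("v").split(".")[:2])
assert (major, minor) >= (1, 11), f"multiparty linkage needs v1.11.0+, got {version}"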

Create a new project

We create a new multiparty project for five parties by specifying the number of parties and the output type (currently only the 'groups' output type supports multiparty linkage). We retain the project_id so we can find the project later, and the result_token so we can retrieve the results (careful: anyone with this token has access to the results). Finally, the update_tokens identify the five data providers and permit them to upload CLKs.

[5]:
project_info = requests.post(
    f"{PREFIX}/projects",
    json={
        "schema": {},
        "result_type": "groups",
        "number_parties": 5,
        "name": "example project"
    }
).json()
project_id = project_info["project_id"]
result_token = project_info["result_token"]
update_tokens = project_info["update_tokens"]

print("project_id:", project_id)
print()
print("result_token:", result_token)
print()
print("update_tokens:", update_tokens)
project_id: 35697a8223f98ed4112488ae3c87e8134d169a364d35e2e7

result_token: 075faf5822cfbe3abe4ce47510a7d3190f518768282f83a7

update_tokens: ['26a30750ba4b7124bc3fd8a36e57bf6211af3fda960c6fb0', '27d17421a4f01c61e4b6ec782486c550da93d350a8d2dbf1', '5c0f98cd55acd48c99bd7f2ddd26af46f6afd31095c7a8a1', 'dcc87296257cb13c9ac3da1e0905c1448a5d51bc9f1fbec3', '9937b6e17abe516e9364cbc88a22593ef78ccdf3d045a907']

Upload the hashed data

This is where each party uploads their CLKs into the service. Here, we do the work of all five data providers inside this for loop. In a deployment scenario, each data provider would be uploading their own CLKs using their own update token.

These CLKs were generated with clkhash (using this linkage schema), so for each data provider we just need to upload the corresponding hash file.
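
For reference, a file like data/clks-1.json could have been produced with clkhash roughly as follows. This is a minimal sketch: the schema path, the shared secret, and the output path are illustrative assumptions, and we assume the upload file wraps the encodings as {"clks": [...]}, matching the bodies posted below.

import json
from clkhash import clk
from clkhash.schema import from_json_file

# Illustrative inputs: schema.json and the secret are assumptions for
# this sketch, not files shipped with the notebook.
with open("schema.json") as schema_file:
    schema = from_json_file(schema_file)

with open("data/dataset-1.csv") as pii_file:
    # Produces one base64-encoded CLK per row of the CSV.
    clks = clk.generate_clk_from_csv(pii_file, "shared secret", schema)

# Wrap the encodings in the JSON structure the service accepts.
with open("clks-1.json", "w") as out:
    json.dump({"clks": clks}, out)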

[6]:
for i, token in enumerate(update_tokens, start=1):
    with open(f"data/clks-{i}.json") as f:
        r = requests.post(
            f"{PREFIX}/projects/{project_id}/clks",
            data=f,
            headers={
                "Authorization": token,
                "content-type": "application/json"
            }
        )
    print(f"Data provider {i}: {r.text}")
Data provider 1: {
  "message": "Updated",
  "receipt_token": "be6ab1dd0833283ec78ce829f7276b53926588d86c503534"
}

Data provider 2: {
  "message": "Updated",
  "receipt_token": "74a3f479949d5bb2537c5cab01db9d8d08bf0f7aad991c4d"
}

Data provider 3: {
  "message": "Updated",
  "receipt_token": "5a88765376836d57e37489e9f205e0d5bb8d9abd6d9cfc7a"
}

Data provider 4: {
  "message": "Updated",
  "receipt_token": "e005523285d21cfec2927d17050faffb1c249a5b8784f2a4"
}

Data provider 5: {
  "message": "Updated",
  "receipt_token": "e2c10b8f9f5f6ea90978d9cf0f3b25700fbd222658b704bb"
}

Begin a run

The data providers have uploaded their CLKs, so we may begin the computation. This computation may be repeated multiple times, each time with different parameters. Each such repetition is called a run. The most important parameter to vary between runs is the similarity threshold. Two records whose similarity is above this threshold will be considered to describe the same entity.

Here, we perform one run. We (somewhat arbitrarily) choose the threshold to be 0.8.

[7]:
r = requests.post(
    f"{PREFIX}/projects/{project_id}/runs",
    headers={
        "Authorization": result_token
    },
    json={
        "threshold": 0.8
    }
)
run_id = r.json()["run_id"]

Check the status

Let's check whether the run has finished (its 'state' will then be 'completed'):

[8]:
r = requests.get(
    f"{PREFIX}/projects/{project_id}/runs/{run_id}/status",
    headers={
        "Authorization": result_token
    }
)
r.json()
[8]:
{'current_stage': {'description': 'waiting for CLKs',
  'number': 1,
  'progress': {'absolute': 5,
   'description': 'number of parties already contributed',
   'relative': 1.0}},
 'stages': 3,
 'state': 'created',
 'time_added': '2020-04-03T01:20:55.141739+00:00',
 'time_started': None}

Now, after some delay (depending on the size of the datasets), we can fetch the results. We could wait for completion by polling the REST API directly with requests (sketched below), but for simplicity we use the watch_run_status function provided by anonlinkclient.rest_client.
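
Direct polling might look like this minimal sketch. The helper name is ours, and we assume 'completed' and 'error' are the terminal run states:

import time

def wait_for_run(project_id, run_id, token, poll_seconds=2, timeout=300):
    # Poll the status endpoint until the run reaches a terminal state.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = requests.get(
            f"{PREFIX}/projects/{project_id}/runs/{run_id}/status",
            headers={"Authorization": token},
        ).json()
        if status["state"] in ("completed", "error"):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError(f"run {run_id} did not finish within {timeout}s")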

[9]:
from IPython.display import clear_output
from anonlinkclient.rest_client import RestClient, format_run_status

rest_client = RestClient(SERVER)
for update in rest_client.watch_run_status(project_id, run_id, result_token, timeout=300):
    clear_output(wait=True)
    print(format_run_status(update))
State: completed
Stage (3/3): compute output

Retrieve the results

We retrieve the results of the linkage. As we selected the 'groups' output type earlier, the result is a list of groups of records. All records in a group are believed to belong to the same entity; each record consists of two values, the party index and the row index within that party's dataset.

The last 20 groups look like this.

[10]:
r = requests.get(
    f"{PREFIX}/projects/{project_id}/runs/{run_id}/result",
    headers={
        "Authorization": result_token
    }
)
groups = r.json()
groups['groups'][-20:]
[10]:
[[[0, 781], [4, 780]],
 [[2, 3173], [4, 3176], [3, 3163], [0, 3145], [1, 3161]],
 [[2, 1617], [3, 1620]],
 [[0, 444], [1, 423]],
 [[4, 391], [1, 409]],
 [[1, 347], [4, 332], [2, 353], [0, 352]],
 [[1, 3171], [4, 3185], [0, 3153], [2, 3182], [3, 3172]],
 [[2, 1891], [4, 1906], [3, 1889]],
 [[0, 2139], [4, 2147]],
 [[0, 1206], [4, 1205], [2, 1206]],
 [[2, 2726], [4, 2710], [3, 2722]],
 [[3, 2040], [4, 2059], [2, 2059]],
 [[1, 899], [4, 924], [0, 923]],
 [[0, 2482], [1, 2494], [4, 2483], [3, 2488], [2, 2509]],
 [[3, 741], [4, 736], [2, 749], [1, 722]],
 [[1, 1587], [4, 1638]],
 [[1, 1157], [4, 1209]],
 [[1, 2027], [3, 740]],
 [[1, 1260], [2, 1311], [3, 1281], [4, 1326]],
 [[1, 1323], [2, 1362], [4, 1384], [0, 1396]]]
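
A quick way to summarise the whole result is the distribution of group sizes, e.g. how many entities were matched across two, three, four, or all five datasets. A minimal sketch:

from collections import Counter

# Count how many groups there are of each size (2 to 5 records).
Counter(len(group) for group in groups["groups"])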

As a sanity check, we print the PII corresponding to these records:

[11]:
def load_dataset(i):
    dataset = []
    with open(f"data/dataset-{i}.csv") as f:
        reader = csv.reader(f)
        next(reader)  # ignore header
        for row in reader:
            dataset.append(row[1:])
    return dataset

datasets = list(map(load_dataset, range(1, 6)))

for group in groups["groups"][-20:]:
    for (i, j) in group:
        print(i, datasets[i][j])
    print()
0 ['kain', 'mason', '09-07-1932', 'male', 'sydnev', '119435.710', '08 8537 7448']
4 ['kaim', 'iiiazon', '09-07-1932', 'male', 'sydnev', '119445.720', '08 8638 7448']

2 ['harriyon', 'micyelmor', '21-04-1971', 'male', 'pert1>', '291889.942', '04 5633 5749']
4 ['harri5on', 'micyelkore', '21-04-1971', '', 'pertb', '291880.942', '04 5633 5749']
3 ['hariso17', 'micelmore', '21-04-1971', 'male', 'pertb', '291880.042', '04 5633 5749']
0 ['harrison', 'michelmore', '21-04-1981', 'malw', 'preth', '291880.942', '04 5643 5749']
1 ['harris0n', 'michelmoer', '21-04-1971', '', '', '291880.942', '04 5633 5749']

2 ['lauren', 'macgowan', '08-01-1960', 'male', '', '43779.493', '03 6533 7075']
3 ['lauren', 'macgowan', '08-01-1950', 'male', 'sydney', '43770.493', '03 6532 7075']

0 ['joshai', 'browne', '30-10-2904', '', 'melbounfe', '522585.205', '03 7150 7587']
1 ['joshua', 'browne', '30-10-2004', 'female', 'melbourne', '522585.205', '03 7150 7587']

4 ['feliciti', 'green', '23-02-1909', 'male', '', '183205.299', '08 4794 9870']
1 ['feljcitv', 'greery', '23-02-1998', 'male', '', '183205.299', '08 4794 9970']

1 ['alannah', 'gully', '15-04-1903', 'make', 'meobourne', '134518.814', '04 5104 4572']
4 ['alana', 'gully', '15-04-1903', 'male', 'melbourne', '134518.814', '04 5104 4582']
2 ['alama', 'gulli', '15-04-1903', 'mald', 'melbourne', '134518.814', '04 5104 5582']
0 ['alsna', 'gullv', '15-04-1903', 'male', '', '134518.814', '04 5103 4582']

1 ['madison', 'crosswell', '11-06-1990', 'male', 'perth', '151347.559', '03 0936 9125']
4 ['madisori', 'crossw4ll', '11-96-1990', 'male', 'perth', '151347.559', '03 0926 9125']
0 ['madispn', 'crossvvell', '11-06-2990', 'male', 'bperth', '151347.559', '03 0936 9125']
2 ['badisoj', 'cross2ell', '11-06-1990', 'malw', 'eprth', '151347.559', '03 0936 9125']
3 ['mad9son', 'crosswell', '11-06-1990', '', '', '151347.559', '03 0937 9125']

2 ['harley', 'krin', '29-05-1967', 'maoe', 'melbourne', '120938.846', '08 8095 4760']
4 ['harley', 'green', '29-05-1967', 'male', 'melbourne', '120937.846', '08 8096 4760']
3 ['harley', 'gfeen', '29-04-1967', 'mslr', 'melbourne', '120937.856', '08 8096 4760']

0 ['nicho1as', 'mak0nw', '06-06-1977', 'male', '', '91255.089', '08 2404 9176']
4 ['nicol', 'maano', '06-06-1977', '', '', '91155.089', '08 2404 9176']

0 ['james', 'lavender', '08-02-2000', 'male', 'canberra', '88844.369', '02 5862 9827']
4 ['jaiiies', 'lvender', '08-02-2900', 'male', 'canberra', '88844.369', '02 5862 982u']
2 ['jimmy', 'lavendre', '08-02-2000', 'malw', 'canberra', '88844.369', '02 5863 9827']

2 ['ara', 'hite', '01-05-1994', 'femzle', 'canberra', '29293.820', '03 0641 9597']
4 ['tara', 'white', '01-05-1984', 'female', 'canberra', '29293.820', '03 0641 9597']
3 ['tara', 'white', '01-05-1974', 'femzle', '', '29293.820', '03 0641 0697']

3 ['spericer', 'pize', '03-04-1983', 'male', 'canberra', '', '03 5691 5970']
4 ['spencer', 'paize', '03-04-1983', 'male', 'canberra', '56328.357', '03 6691 5970']
2 ['spenfer', 'pai2e', '03-04-1893', 'male', 'can1>erra', '56328.357', '03 6691 5970']

1 ['isbaella', 'darby-cocks', '14-09-1921', 'male', 'pergh', '87456.184', '03 0678 5513']
4 ['isabella', 'darby-cocks', '14-09-1921', 'male', 'perth', '87456.194', '03 0679 5513']
0 ['isabeloa', 'darby-cocks', '14-09-2921', 'make', 'perth', '87456.194', '04 0678 6513']

0 ['jarrod', 'brone', '09-08-1967', 'mal3', 'perth', '1075t6.775', '08 2829 1110']
1 ['jarrod', 'browne', '09-08-1967', 'male', 'perth', '107556.775', '08 2820 1110']
4 ['jarrod', 'brownb', '09-08-1967', 'mqle', 'pertb', '107556.775', '08 2820 2110']
3 ['jarr0d', 'brown', '09-08-1967', 'male', '', '107546.775', '08 2820 1110']
2 ['jarr0d', 'borwne', '09-08-1067', 'male', 'pertb', '107556.775', '08 2820 1110']

3 ['marko', 'matthews', '11-04-1992', 'male', 'melbourne', '106467.902', '03 1460 7673']
4 ['marko', 'matthews', '11-0r-1992', 'maoe', 'melhourne', '106467.992', '03 1460 7673']
2 ['marko', 'matthevvs', '11-94-1992', 'mals', 'melbourne', '', '03 1460 7673']
1 ['makro', 'matthews', '11-04-1992', '', 'emlbourne', '106467.903', '03 1460 7673']

1 ['nkiki', 'spers', '10-02-2007', 'fenale', '', '156639.106', '07 9447 1767']
4 ['nikkui', 'pezes', '10-02-20p7', 'female', '', '156639.106', '07 9447 1767']

1 ['roby', 'felepa', '25-19-1959', 'male', 'aclonerra', '85843.631', '07 5804 7920']
4 ['robert', 'felepa', '25-10-1959', 'male', 'can1>erra', '85842.631', '07 5804 7929']

1 ['shai', 'dixon', '24-09-1979', 'female', 'melbourne', '609473.955', '08 4533 9404']
3 ['mia', 'dixon', '24-09-1979', 'female', 'melbourne', '1198037.556', '08 3072 7335']

1 ['livia', 'riaj', '13-03-1907', 'malw', 'melbovrne', '73305.107', '07 3846 2530']
2 ['livia', 'ryank', '13-03-1907', 'malw', 'melbuorne', '73305.107', '07 3946 2630']
3 ['ltvia', 'ryan', '13-03-1907', 'maoe', 'melbourne', '73305.197', '07 3046 2530']
4 ['livia', 'ryan', '13-03-1907', 'male', 'melbourne', '73305.107', '07 3946 2530']

1 ['brock', 'budge', '27-09-1960', 'male', 'perth', '209428.166', '02 5106 4056']
2 ['brocck', 'bud9e', '27-09-1960', 'male', 'pertb', '208428.166', '02 5106 4056']
4 ['brock', 'budge', '27-09-1970', 'male', '', '209428.167', '02 5206 4056']
0 ['brock', 'bwudge', '27-09-2860', '', 'perth', '209428.166', '02 5106 3056']

Despite the high amount of noise in the data, the Anonlink Entity Service was able to produce a fairly accurate matching. However, note that the group pairing shai dixon with mia dixon above is most likely not a true match: the given names, incomes, and phone numbers all differ.

We may be able to improve on these results by fine-tuning the linkage schema or by changing the similarity threshold.
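For example, since the uploaded CLKs are retained by the project, we could start additional runs at different thresholds and compare the resulting groupings. The candidate thresholds below are arbitrary:

# Start one run per candidate threshold; each re-uses the uploaded CLKs.
for threshold in (0.75, 0.80, 0.85, 0.90):
    r = requests.post(
        f"{PREFIX}/projects/{project_id}/runs",
        headers={"Authorization": result_token},
        json={"threshold": threshold},
    )
    print(threshold, r.json()["run_id"])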

Delete the project

[12]:
r = requests.delete(
    f"{PREFIX}/projects/{project_id}",
    headers={
        "Authorization": result_token
    }
)
print(r.status_code)
204