K-Nearest Neighbors (KNN) Classification

When you see something new, how can you guess its label or value by looking at similar cases you have already seen? The K-nearest-neighbors (KNN) algorithm predicts the label of a new item from the labels of its K nearest neighbors among known examples. Apps rely on it to find similar products, spot spam by comparing new emails to past ones, match help tickets to known categories, and estimate numeric values such as a movie rating or a house price. It works well as a first tool because it is easy to understand and quick to try.

For instance, given a grid of colored dots, if we'd like to predict the most probable color for a newly added dot, we could look at the colors of its closest neighbors and let them decide.
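The colored-dot idea can be sketched with a tiny majority-vote KNN in Python. The two clusters, their colors, and k=3 are illustrative assumptions, not part of the exercise:

```python
from collections import Counter
import math

def knn_predict(points, labels, query, k=3):
    """Predict a label for `query` by majority vote among its k nearest points."""
    # Sort indices of the known points by Euclidean distance to the query.
    order = sorted(range(len(points)),
                   key=lambda i: math.dist(points[i], query))
    # Vote among the labels of the k closest points.
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Hypothetical dots: a red cluster near (0, 0) and a blue cluster near (5, 5).
points = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
labels = ["red", "red", "red", "blue", "blue", "blue"]
print(knn_predict(points, labels, (0.5, 0.5)))  # → red
print(knn_predict(points, labels, (5.5, 5.5)))  # → blue
```

With k=1 this degenerates to picking the single closest point, which is exactly what the task below asks for.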

The Nearest Flavor

A curious bee explores a new orchard. It knows where some labeled trees are and what they taste like: apple or pear. When the bee lands somewhere, it wants the flavor of the closest known tree.

Your job is to help the bee decide.

The first line of the input contains a single integer n representing the number of known trees.

The next n lines each contain two floating-point numbers x and y followed by a flavor word, apple or pear, describing one tree’s position and flavor.

The next line contains a single integer q representing the number of landing positions to check.

The last q lines each contain two floating-point numbers x y for the bee’s landing position.

For each landing position, print the flavor of the single nearest known tree. If several known trees are at exactly the same distance from the landing spot, print the flavor of the one that appears earliest in the input.
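A direct way to implement this rule, sketched in Python: compare squared distances (so exact ties stay exact, with no square-root rounding) and rely on min() returning the earliest element on ties. The helper name nearest_flavor is illustrative:

```python
def nearest_flavor(trees, landings):
    """trees: list of (x, y, flavor) in input order; landings: list of (x, y).
    Returns the flavor of the nearest tree for each landing position."""
    answers = []
    for bx, by in landings:
        # Squared distance preserves the ordering of true distances,
        # and min() keeps the earliest tree on exact ties, as required.
        best = min(trees, key=lambda t: (t[0] - bx) ** 2 + (t[1] - by) ** 2)
        answers.append(best[2])
    return answers

# First sample from the statement:
trees = [(0, 0, "apple"), (2, 0, "pear"), (0, 2, "pear"), (-1, -1, "apple")]
print(nearest_flavor(trees, [(0.9, 0.1), (1.1, 0), (1, 1)]))
# → ['apple', 'pear', 'apple']
```

A full solution would wrap this in the stdin parsing described above; the brute-force scan is O(n) per query, which is comfortable within a 2-second limit for modest inputs.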

Example 1

Input
4
0 0 apple
2 0 pear
0 2 pear
-1 -1 apple
3
0.9 0.1
1.1 0
1 1

Output
apple
pear
apple

Example 2

Input
3
-1 0 apple
3 0 pear
0 4 pear
2
2 1
2 5

Output
pear
pear

Example 3

Input
2
0 0 pear
0 2 apple
3
0 1
0 1.001
1 1

Output
pear
apple
pear

Constraints

Time limit: 2 seconds

Memory limit: 512 MB

Output limit: 1 MB
