Quantum Machine Learning Part 3

This article presents a problem that can be solved with quantum machine learning. We invite you to solve the problem yourself and share your solution and approach in the forums. The first 10 correct solutions posted will win a Topcoder T-shirt.

Variational quantum classifier

In the previous two articles, we looked at quantum algorithms such as the variational quantum eigensolver and quantum classifiers. In this article, we look at the variational quantum classifier.

Quantum algorithms require a fixed number of qubits and quantum gates, and both need to be robust against errors. Variational circuits, in which the parameters of the gates are learned, are well suited to machine learning. The input features are encoded as vectors in the amplitudes of a quantum system. The quantum circuit consists of parametrized single- and two-qubit gates, and a single-qubit measurement classifies the inputs. A hybrid quantum-classical training scheme optimizes the parameters of the variational circuit. Quantum classifiers of this kind perform well on classical benchmark datasets.
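To make the measurement step concrete, here is a minimal NumPy sketch (not PennyLane code) of how a single-qubit expectation value can act as a classifier output: an RY(θ) rotation applied to |0⟩ gives ⟨Z⟩ = cos θ, and the sign of that expectation serves as the predicted class.

```python
import numpy as np

def expval_z(theta):
    # State after RY(theta) applied to |0>: (cos(theta/2), sin(theta/2)).
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    # <Z> = |amp_0|^2 - |amp_1|^2 = cos(theta)
    Z = np.diag([1.0, -1.0])
    return state @ Z @ state

# The sign of the expectation acts as a binary class label.
print(np.sign(expval_z(0.3)))  # 1.0  (state close to |0>)
print(np.sign(expval_z(2.8)))  # -1.0 (state close to |1>)
```

Training a variational classifier amounts to adjusting the rotation parameters so that this sign agrees with the ±1 labels.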

Dataset and Problem

The Iris flower data set was first presented by Ronald Fisher, a biologist and statistician, and it is one of the best-known datasets in pattern recognition. The data set contains 150 samples: 50 from each of three species of Iris (Iris setosa, Iris virginica, and Iris versicolor). Four features were measured for each sample: the length and the width of the sepals and petals, in centimeters. The problem below builds a classifier for this data. The goal is to train on the data and improve the accuracy of predicting the class of an iris. The iris data file used here contains petal length, petal width, sepal length, sepal width, and species for the two classes Iris setosa and Iris virginica.

The goal is to implement the #TBDs in the code below. The solution needs to iterate to reduce the cost and improve the training and validation accuracy. The first 10 correct entries will win a Topcoder T-shirt. Entries can be posted in the forum link.

Prerequisites:

  1. You need to set up Python 3.5 to run the code samples below. You can download it from this link.

Problem

import pennylane as qml
from pennylane import numpy as np
from pennylane.optimize import NesterovMomentumOptimizer

dev = qml.device('default.qubit', wires=2)


def GetAngles(xval):
    """Calculate the rotation angles that amplitude-encode a 4-d vector.
    """

    beta0_val = 2 * np.arcsin(np.sqrt(xval[1] ** 2) / np.sqrt(xval[0] ** 2 + xval[1] ** 2 + 1e-12))
    beta1_val = 2 * np.arcsin(np.sqrt(xval[3] ** 2) / np.sqrt(xval[2] ** 2 + xval[3] ** 2 + 1e-12))
    beta2_val = 2 * np.arcsin(np.sqrt(xval[2] ** 2 + xval[3] ** 2) / np.sqrt(xval[0] ** 2 + xval[1] ** 2 + xval[2] ** 2 + xval[3] ** 2))

    return np.array([beta2_val, -beta1_val / 2, beta1_val / 2, -beta0_val / 2, beta0_val / 2])


def PrepareState(arr):
    """Encodes a 4-d vector
    into the amplitudes of 2 qubits.
    """

    qml.RY(arr[0], wires=0)

    qml.CNOT(wires=[0, 1])
    qml.RY(arr[1], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RY(arr[2], wires=1)

    qml.PauliX(wires=0)
    qml.CNOT(wires=[0, 1])
    qml.RY(arr[3], wires=1)
    qml.CNOT(wires=[0, 1])
    qml.RY(arr[4], wires=1)
    qml.PauliX(wires=0)


def GetLayer(Warr):
    """A single layer of the variational classifier: arbitrary
    rotations on both qubits followed by an entangling CNOT.
    """

    qml.Rot(Warr[0, 0], Warr[0, 1], Warr[0, 2], wires=0)
    qml.Rot(Warr[1, 0], Warr[1, 1], Warr[1, 2], wires=1)

    qml.CNOT(wires=[0, 1])
    

@qml.qnode(dev)
def GetCircuit(weight, angle=None):
    """Retrieves circuit for the variational classifier."""

    PrepareState(angle)
    
    for W in weight:
        GetLayer(W)

    return qml.expval(qml.PauliZ(0))


def GetVariationalClassifier(var, angle=None):
    """The full variational classifier: the circuit output plus a classical bias."""

    #TBD: unpack the circuit weights and the bias term from var

    return GetCircuit(weight, angle=angle) + bias_val


def GetSquareLoss(label, prediction):
    loss_val = 0
    for l, p in zip(label, prediction):
        loss_val = loss_val + (l - p) ** 2
    loss_val = loss_val / len(label)

    return loss_val


def GetAccuracy(label, prediction):

    loss_val = 0
    for l, p in zip(label, prediction):
        if abs(l - p) < 1e-5:
            loss_val = loss_val + 1
    loss_val = loss_val / len(label)

    return loss_val
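To see the two metrics at work, here is a small standalone example with hypothetical ±1 labels; the helper functions mirror GetSquareLoss and GetAccuracy above.

```python
def square_loss(labels, predictions):
    # Mean squared error over the batch.
    return sum((l - p) ** 2 for l, p in zip(labels, predictions)) / len(labels)

def accuracy(labels, predictions):
    # Fraction of exact (within 1e-5) matches.
    return sum(abs(l - p) < 1e-5 for l, p in zip(labels, predictions)) / len(labels)

labels = [1, -1, 1, -1]
predictions = [1, -1, -1, -1]            # the third prediction is wrong

print(square_loss(labels, predictions))  # (0 + 0 + 4 + 0) / 4 = 1.0
print(accuracy(labels, predictions))     # 3 of 4 correct = 0.75
```

The square loss is what training minimizes; the accuracy is only used for monitoring, since its step-like behavior makes it unsuitable for gradient descent.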


def GetCost(weight, feature, label):
    """The error function minimized during training."""

    #TBD: compute a prediction for each feature vector in feature

    return GetSquareLoss(label, prediction)


iris_data = np.loadtxt("iris_classes.txt")
Xcoord = iris_data[:, 0:2]

pad = 0.3 * np.ones((len(Xcoord), 1))
X_padding = np.c_[np.c_[Xcoord, pad], np.zeros((len(Xcoord), 1)) ] 

norm = np.sqrt(np.sum(X_padding ** 2, -1))
X_norm = (X_padding.T / norm).T  
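The padding and normalization step can be checked in isolation. The sketch below uses a few hypothetical 2-feature rows in place of the iris columns; padding with a constant 0.3 and a zero yields 4-dimensional vectors (two qubits), and the division by the row norms produces the unit-length vectors that amplitude encoding requires.

```python
import numpy as np

# Hypothetical 2-feature rows standing in for the first two iris columns.
Xcoord = np.array([[1.4, 0.2], [4.7, 1.4], [1.3, 0.2]])

pad = 0.3 * np.ones((len(Xcoord), 1))                              # constant third feature
X_padding = np.c_[np.c_[Xcoord, pad], np.zeros((len(Xcoord), 1))]  # fourth feature is 0

norm = np.sqrt(np.sum(X_padding ** 2, -1))
X_norm = (X_padding.T / norm).T

print(np.sum(X_norm ** 2, axis=1))  # each row now has unit norm
```

The nonzero padding constant matters: without it, rows that differ only by a scale factor would collapse onto the same normalized vector.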

ftrs = np.array([GetAngles(x) for x in X_norm])   

Ycoord = iris_data[:, -1]

np.random.seed(0)
num_data = len(Ycoord)
num_train = int(0.75 * num_data)
index = np.random.permutation(range(num_data))
feats_train = ftrs[index[:num_train]]
Y_train = Ycoord[index[:num_train]]
ftrs_val = ftrs[index[num_train:]]
Y_val = Ycoord[index[num_train:]]
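The permutation split above can be illustrated on a small hypothetical sample count: the training and validation index sets are disjoint and together cover every sample.

```python
import numpy as np

np.random.seed(0)
num_data = 8                                  # a small hypothetical sample count
num_train = int(0.75 * num_data)              # 6 training samples
index = np.random.permutation(range(num_data))
train_idx, val_idx = index[:num_train], index[num_train:]

print(sorted(np.concatenate([train_idx, val_idx])))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

Fixing the seed makes the split reproducible, which helps when comparing training runs.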

qubits = 2
layers = 6
varinit = (0.01 * np.random.randn(layers, qubits, 3), 0.0)

opt = NesterovMomentumOptimizer(0.01)
batsize = 5

var = varinit
for it in range(60):

    bat_index = np.random.randint(0, num_train, (batsize, ))
    features_train_batch = feats_train[bat_index]
    Ytrainbat = Y_train[bat_index]
    var = opt.step(lambda v: GetCost(v, features_train_batch, Ytrainbat), var)

    predict_train = [np.sign(GetVariationalClassifier(var, angle=f)) for f in feats_train]
    predict_val = [np.sign(GetVariationalClassifier(var, angle=f)) for f in ftrs_val]

    accuracy_train = GetAccuracy(Y_train, predict_train)
    accuracy_val = GetAccuracy(Y_val, predict_val)

    print("Iterations: {:5d} | Cost Features: {:0.7f} | Accuracy training: {:0.7f} | Accuracy validation: {:0.7f} "
          "".format(it+1, GetCost(var, ftrs, Ycoord), accuracy_train, accuracy_val))

Instructions for Running the Code

Save the code above as quantum_classifier.py, then install PennyLane and run the script:

pip install pennylane

python quantum_classifier.py