Quick Start with a dummy cluster
Let's set up the Tergite stack to run against a dummy cluster on your local machine.
We will not need an actual quantum computer. Note, however, that the dummy cluster only ever returns 0 in its results.
Prerequisites
You may have to install the following software if you don't have it installed already.
Set up the Frontend
- Ensure docker is running.
docker --help
Note: on macOS, start Docker by running:
open -a Docker
Note: on Windows, start Docker by running:
Start-Process "C:\Program Files\Docker\Docker\Docker Desktop.exe"
Note: on Linux, start Docker by running:
sudo systemctl start docker
- Open another terminal
- Clone the tergite-frontend repo
git clone https://github.com/tergite/tergite-frontend.git
- Enter the tergite-frontend folder
cd tergite-frontend
- Create an `mss-config.toml` file with Visual Studio Code (or any other text editor).
code mss-config.toml
- Update the `mss-config.toml` with the following content:
# mss-config.toml
# general configurations
[general]
# the port on which MSS is running
mss_port = 8002
# the port on which the websocket is running
ws_port = 6532
# environment reflects which environment the app is to run in.
environment = "development"
# the host the uvicorn runs on.
# During testing auth on 127.0.0.1, set this to "127.0.0.1". default: "0.0.0.0"
mss_host = "127.0.0.1"
[database]
# configurations for the database
name = "testing"
# database URI
# host.docker.internal resolves to the host's 127.0.0.1
# see https://stackoverflow.com/questions/31324981/how-to-access-host-port-from-docker-container#answer-43541732
url = "mongodb://host.docker.internal:27017"
[[backends]]
name = "loke"
# the URL where this backend is running
# host.docker.internal resolves to the host's 127.0.0.1
# see https://stackoverflow.com/questions/31324981/how-to-access-host-port-from-docker-container#answer-43541732
url = "http://host.docker.internal:8000"
[auth]
# turn auth OFF or ON, default=true
is_enabled = false
cookie_domain = "127.0.0.1"
cookie_name = "tergiteauth"
[[auth.clients]]
name = "github"
client_id = "some-github-obtained-client-id"
client_secret = "some-github-obtained-client-secret"
redirect_url = "http://127.0.0.1:8002/auth/app/github/callback"
client_type = "github"
email_regex = "^(john\\.doe|jane|aggrey)@example\\.com$"
email_domain = "example.com"
roles = ["admin", "user"]
[[auth.clients]]
name = "puhuri"
client_id = "some-puhuri-obtained-client-id"
client_secret = "some-puhuri-obtained-client-secret"
redirect_url = "http://127.0.0.1:8002/auth/app/puhuri/callback"
client_type = "openid"
email_regex = "^(john\\.doe|jane)@example\\.com$"
email_domain = "example.com"
roles = ["user"]
openid_configuration_endpoint = "https://proxy.acc.puhuri.eduteams.org/.well-known/openid-configuration"
# Puhuri synchronization
# Puhuri is a resource management platform for HPC systems that is also to be used for quantum computers
[puhuri]
# turn puhuri synchronization OFF or ON, default=true
is_enabled = false
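The `email_regex` entries above control which user emails each auth client will admit. If you want to sanity-check a pattern before using it, here is a minimal sketch; the regex is copied from the config above, and the email addresses are made-up examples:
# check_email_regex.py
import re

# the pattern used for the "github" client in mss-config.toml above
pattern = re.compile(r"^(john\.doe|jane|aggrey)@example\.com$")

for email in ["john.doe@example.com", "jane@example.com", "mallory@example.com"]:
    print(email, "->", "admitted" if pattern.match(email) else "rejected")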
- Create a `.env` file with Visual Studio Code (or any other text editor).
code .env
- Update the `.env` file with the following content:
# .env
MSS_PORT=8002
# required
ENVIRONMENT="development"
MSS_V2_API_URL="http://127.0.0.1:8002/v2"
GRAFANA_LOKI_URL=http://127.0.0.1:3100/loki/api/v1/push
LOKI_LOGGER_ID=some-generic-id
# docker LOGGING_DRIVER can be journald, json-file, local etc.
LOGGING_DRIVER=json-file
# image versions:
# Note: If you ever want the images to be rebuilt,
# you have to change the app version numbers here
# before running "docker compose up"
MSS_VERSION=v0.0.1
DASHBOARD_VERSION=v0.0.1
PROMTAIL_VERSION=2.8.3
- For Linux: open the MongoDB configuration file
code /etc/mongod.conf
- For Linux: replace the contents of that config file with the following:
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
#  engine:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0

# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo

#security:
#operationProfiling:
#replication:
#sharding:
## Enterprise-Only Options:
#auditLog:
- For Linux: restart the mongod service and make sure that it's active
sudo service mongod restart
sudo service mongod status
- Open the MongoDB Compass application and connect to the default local MongoDB instance.
- Create a new database called “testing” that contains a “backends” collection. (A scripted alternative is sketched below.)
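If you prefer a script over Compass, here is a minimal sketch using pymongo. Note that pymongo is an assumption here (it is not among this guide's requirements), so install it first with pip install pymongo.
# create_testing_db.py
from pymongo import MongoClient

client = MongoClient("mongodb://127.0.0.1:27017")
db = client["testing"]  # must match [database].name in mss-config.toml

# create an explicit empty "backends" collection if it does not exist yet
if "backends" not in db.list_collection_names():
    db.create_collection("backends")

print(db.list_collection_names())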
- Delete the old docker images of “tergite/tergite-mss” and “tergite/tergite-dashboard” if they exist.
docker rmi -f tergite/tergite-mss:v0.0.1
docker rmi -f tergite/tergite-dashboard:v0.0.1
- To run the services, use the `fresh-docker-compose.yml` file.
docker compose -f fresh-docker-compose.yml up -d
- Remove any stale artefacts created during the docker build
docker system prune
- Open your browser at:
- http://localhost:8002 to see the MSS service
- http://localhost:3000 to see the Dashboard application
- To view the status of the services, run:
docker compose -f fresh-docker-compose.yml ps
- To stop the services, run:
docker compose -f fresh-docker-compose.yml stop
- To stop the services and also remove their containers, run:
docker compose -f fresh-docker-compose.yml down
- To view the logs of the docker containers (e.g. to catch errors), use:
docker compose -f fresh-docker-compose.yml logs -f
See more at https://docs.docker.com/reference/cli/docker/compose/logs/
- Ensure that the services are running. If they are not, restart them.
docker compose -f fresh-docker-compose.yml up -d
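As a scripted alternative to checking in the browser, here is a minimal health-check sketch. It assumes the requests package (pip install requests); the URLs are the ones listed above.
# check_frontend.py
import requests

SERVICES = {
    "MSS": "http://localhost:8002",
    "Dashboard": "http://localhost:3000",
}

for name, url in SERVICES.items():
    try:
        resp = requests.get(url, timeout=5)
        print(f"{name} at {url}: HTTP {resp.status_code}")
    except requests.exceptions.ConnectionError:
        print(f"{name} at {url}: not reachable; try restarting the services")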
Set up the Backend
- Ensure you have conda installed. (You could simply have Python 3.9+ installed instead.)
- Ensure you have the Redis server running.
redis-server
- Open a terminal.
- Clone the tergite-backend repo
git clone https://github.com/tergite/tergite-backend.git
- Create a conda environment and activate it
conda create -n bcc -y python=3.9
conda activate bcc
- Install dependencies
cd tergite-backend
pip install -r requirements.txt
- Create a `.env` file with Visual Studio Code (or any other text editor).
code .env
- Update the `.env` file to have the following content:
# .env
APP_SETTINGS=development
IS_AUTH_ENABLED=False
DEFAULT_PREFIX=loke
STORAGE_ROOT=/tmp
LOGFILE_DOWNLOAD_POOL_DIRNAME=logfile_download_pool
LOGFILE_UPLOAD_POOL_DIRNAME=logfile_upload_pool
JOB_UPLOAD_POOL_DIRNAME=job_upload_pool
JOB_PRE_PROC_POOL_DIRNAME=job_preproc_pool
JOB_EXECUTION_POOL_DIRNAME=job_execution_pool
# Main Service Server
MSS_MACHINE_ROOT_URL=http://localhost:8002
MSS_PORT=8002
# Backend Control computer
BCC_MACHINE_ROOT_URL=http://localhost:8000
BCC_PORT=8000
EXECUTOR_TYPE=quantify
- Create a `quantify-config.json` file with Visual Studio Code (or any other text editor).
code quantify-config.json
- Update the `quantify-config.json` with the following content:
{
  "config_type": "quantify_scheduler.backends.qblox_backend.QbloxHardwareCompilationConfig",
  "hardware_description": {
    "cluster0": {
      "instrument_type": "Cluster",
      "ref": "internal",
      "modules": {
        "2": {
          "instrument_type": "QCM_RF"
        },
        "16": {
          "instrument_type": "QRM_RF"
        }
      }
    }
  },
  "hardware_options": {
    "modulation_frequencies": {
      "q0:res-q0.ro": {"interm_freq": 100e6},
      "q0:mw-q0.01": {"interm_freq": 100e6}
    }
  },
  "connectivity": {
    "graph": [
      ["cluster0.module2.complex_output_0", "q0:mw"],
      ["cluster0.module16.complex_output_0", "q0:res"]
    ]
  }
}
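`quantify-config.json` must be valid JSON, and its module map should agree with the `quantify-metadata.yml` you will create next. Here is a minimal standard-library sketch for checking that, using the file name from above:
# validate_quantify_config.py
import json

with open("quantify-config.json") as f:
    config = json.load(f)  # raises if the file is not valid JSON

# config_type names the quantify-scheduler model that will parse this file
assert config["config_type"].endswith("QbloxHardwareCompilationConfig")

modules = config["hardware_description"]["cluster0"]["modules"]
print({slot: mod["instrument_type"] for slot, mod in modules.items()})
# expected output: {'2': 'QCM_RF', '16': 'QRM_RF'}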
- Create a `quantify-metadata.yml` file with Visual Studio Code (or any other text editor).
code quantify-metadata.yml
- Update the `quantify-metadata.yml` with the following content:
# quantify-metadata.yml
cluster0:
  # instrument type
  # Cluster or SPI-Rack
  instrument_type: Cluster
  # IP address of the cluster
  ip_address: 192.168.78.101
  # whether to attempt connecting to the real cluster or to return dummy results for test purposes
  is_dummy: true
  # this would only be used to set up dummy cluster modules
  modules:
    "2":
      instrument_type: QCM_RF
    "16":
      instrument_type: QRM_RF
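For context, `is_dummy: true` makes the backend instantiate a simulated Qblox cluster from the module map above instead of connecting to the IP address. Here is a rough sketch of what that amounts to, assuming the qblox-instruments package (a tergite-backend dependency); the exact wiring inside tergite-backend may differ.
# dummy_cluster.py
from qblox_instruments import Cluster, ClusterType

# a dummy cluster is built from the module map, so no hardware is contacted
cluster = Cluster(
    "cluster0",
    dummy_cfg={
        2: ClusterType.CLUSTER_QCM_RF,   # module "2" in quantify-metadata.yml
        16: ClusterType.CLUSTER_QRM_RF,  # module "16" in quantify-metadata.yml
    },
)
print(cluster.get_system_state())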
- Create a `backend_config.toml` file with Visual Studio Code (or any other text editor).
code backend_config.toml
- Update the `backend_config.toml` with the following content:
# backend_config.toml
[general_config]
name = "loke"
is_active = true
characterized = true
open_pulse = true
simulator = false
version = "1.0.0"
online_date = "2024-10-09T00:00:00"
num_qubits = 1
num_couplers = 0
num_resonators = 1
dt = 1e-9
dtm = 1e-9
[device_config]
discriminators = [ "lda" ]
qubit_ids = [ "q0" ]
meas_map = [ [ 0 ], [ 1 ] ]
coordinates = [
    [0, 0],
    [1, 0]
]
qubit_parameters = [
    "id",
    "x_position",
    "y_position",
    "xy_drive_line",
    "z_drive_line",
    "frequency",
    "pi_pulse_amplitude",
    "pi_pulse_duration",
    "pulse_type",
    "pulse_sigma",
    "t1_decoherence",
    "t2_decoherence"
]
resonator_parameters = [
    "id",
    "x_position",
    "y_position",
    "readout_line",
    "acq_delay",
    "acq_integration_time",
    "frequency",
    "pulse_delay",
    "pulse_duration",
    "pulse_type",
    "pulse_amplitude"
]
coupler_parameters = [
    "id",
    "frequency",
    "frequency_detuning",
    "anharmonicity",
    "coupling_strength_02",
    "coupling_strength_12",
    "cz_pulse_amplitude",
    "cz_pulse_dc_bias",
    "cz_pulse_phase_offset",
    "cz_pulse_duration_before",
    "cz_pulse_duration_rise",
    "cz_pulse_duration_constant",
    "pulse_type"
]
[device_config.discriminator_parameters]
lda = [
    "coef_0",
    "coef_1",
    "intercept"
]
[device_config.coupling_dict]
[gates.x]
coupling_map = [ [ 0, 1], [1, 0] ]
qasm_def = "gate x q { U(pi, 0, pi) q; }"
parameters = [ ]
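Before starting the backend, you can sanity-check that this file is self-consistent, e.g. that num_qubits matches the declared qubit ids. A minimal sketch, assuming pip install tomli (on Python 3.11+ the built-in tomllib works the same):
# check_backend_config.py
import tomli  # on Python 3.11+: import tomllib as tomli

with open("backend_config.toml", "rb") as f:
    cfg = tomli.load(f)

general = cfg["general_config"]
device = cfg["device_config"]

# the advertised qubit count should match the declared qubit ids
assert general["num_qubits"] == len(device["qubit_ids"]), "num_qubits mismatch"
print(f"backend '{general['name']}' declares qubits: {device['qubit_ids']}")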
- Create a `calibration.seed.toml` file with Visual Studio Code (or any other text editor).
code calibration.seed.toml
- Update the `calibration.seed.toml` file with the following content:
# calibration.seed.toml
[calibration_config]
[calibration_config.units.qubit]
frequency = "Hz"
t1_decoherence = "s"
t2_decoherence = "s"
anharmonicity = "Hz"
[calibration_config.units.readout_resonator]
acq_delay = "s"
acq_integration_time = "s"
frequency = "Hz"
pulse_delay = "s"
pulse_duration = "s"
pulse_amplitude = ""
pulse_type = ""
[calibration_config.units.coupler]
frequency = "Hz"
frequency_detuning = "Hz"
anharmonicity = "Hz"
coupling_strength_02 = "Hz"
coupling_strength_12 = "Hz"
cz_pulse_amplitude = ""
cz_pulse_dc_bias = ""
cz_pulse_phase_offset = "rad"
cz_pulse_duration_before = "s"
cz_pulse_duration_rise = "s"
cz_pulse_duration_constant = "s"
pulse_type = ""
[[calibration_config.qubit]]
id="q0"
t1_decoherence = 3.4e-5
t2_decoherence = 3.3e-5
frequency = 4511480043.556283
pi_pulse_amplitude = 0.17555712637424228
pi_pulse_duration = 5.6e-8
pulse_type = "Gaussian"
pulse_sigma = 7e-9
# -- Resonators --
[[calibration_config.readout_resonator]]
id="q0"
acq_delay = 5e-8
acq_integration_time = 1e-6
frequency = 7260080000.0
pulse_amplitude = 0.1266499392606423
pulse_delay = 0.0
pulse_duration = 9e-7
pulse_type = "Square"
# -- Discriminators --
[calibration_config.discriminators.lda.q0]
intercept = -38.4344477840827
coef_0 = -98953.87504155144
coef_1 = -114154.48696231026
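Since the backend reads both files, it is also worth checking that the ids seeded here match the qubit_ids in backend_config.toml. Reusing tomli from the previous sketch:
# check_calibration_seed.py
import tomli  # on Python 3.11+: import tomllib as tomli

with open("backend_config.toml", "rb") as f:
    device = tomli.load(f)["device_config"]
with open("calibration.seed.toml", "rb") as f:
    seed = tomli.load(f)["calibration_config"]

seeded_ids = {q["id"] for q in seed["qubit"]}
expected_ids = set(device["qubit_ids"])

# every declared qubit should have seed calibration data, and vice versa
assert seeded_ids == expected_ids, (seeded_ids, expected_ids)
print("calibration seed covers qubits:", sorted(seeded_ids))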
- Run the start script
./start_bcc.sh
- Open your browser at http://localhost:8000/docs to see the interactive API docs
Run an Experiment
- Open another terminal
- Create a new folder “tergite-test” and enter it
mkdir tergite-test
cd tergite-test
- Create conda environment and activate it
conda create -n tergite -y python=3.9
conda activate tergite
- Install Qiskit and the Tergite SDK by running the commands below:
pip install qiskit
pip install tergite
- Create a `main.py` file with Visual Studio Code (or any other text editor).
code main.py
- Update the `main.py` file with the following content:
# main.py
"""A sample script doing a very simple quantum operation"""
import time

import qiskit.circuit as circuit
import qiskit.compiler as compiler
from tergite.qiskit.providers import Job, Tergite
from tergite.qiskit.providers.provider_account import ProviderAccount

if __name__ == "__main__":
    # the Tergite API URL
    API_URL = "http://localhost:8002"
    # The name of the Quantum Computer to use from the available quantum computers
    BACKEND_NAME = "loke"
    # the name of this service. For your own bookkeeping.
    SERVICE_NAME = "local"
    # the timeout in seconds for how long to keep checking for results
    POLL_TIMEOUT = 100

    # create the Qiskit circuit
    qc = circuit.QuantumCircuit(1)
    qc.x(0)
    qc.h(0)
    qc.measure_all()

    # create a provider
    # provider account creation can be skipped in case you already saved
    # your provider account to the `~/.qiskit/tergiterc` file.
    # See below how that is done.
    account = ProviderAccount(service_name=SERVICE_NAME, url=API_URL)
    provider = Tergite.use_provider_account(account)
    # to save this account to the `~/.qiskit/tergiterc` file, add `save=True`:
    # provider = Tergite.use_provider_account(account, save=True)

    # Get the Tergite backend in case you skipped provider account creation
    # provider = Tergite.get_provider(service_name=SERVICE_NAME)
    backend = provider.get_backend(BACKEND_NAME)
    backend.set_options(shots=1024)

    # compile the circuit
    tc = compiler.transpile(qc, backend=backend)

    # run the circuit
    job: Job = backend.run(tc, meas_level=2, meas_return="single")

    # view the results, polling until they are ready or the timeout is hit
    elapsed_time = 0
    result = None
    while result is None:
        if elapsed_time > POLL_TIMEOUT:
            raise TimeoutError(
                f"result polling timeout {POLL_TIMEOUT} seconds exceeded"
            )
        time.sleep(1)
        elapsed_time += 1
        result = job.result()

    print(result.get_counts())
- Execute the above script by running the command below.
python main.py
- It should return something like:
Results OK
{'0': 1024}
Note: we get only 0's because we are using the dummy cluster from quantify-scheduler. On a real device, this circuit (an X followed by an H) would give counts split roughly evenly between '0' and '1'.