Understanding Performance Fluctuations in Quantum Processors

Posted by Paul V. Klimov, Research Scientist, Google AI Quantum Team

One area of research the Google AI Quantum team pursues is building quantum processors from superconducting electrical circuits, which are attractive candidates for implementing quantum bits (qubits). While superconducting circuits have demonstrated state-of-the-art performance and extensibility to modest processor sizes comprising tens of qubits, an outstanding challenge is stabilizing their performance, which can fluctuate unpredictably. Although performance fluctuations have been observed in numerous superconducting qubit architectures, their origin isn’t well understood, impeding progress in stabilizing processor performance.

In “Fluctuations of Energy-Relaxation Times in Superconducting Qubits” published in this week’s Physical Review Letters, we use qubits as probes of their environment to show that performance fluctuations are dominated by material defects. This was done by investigating qubits’ energy relaxation times (T1) — a popular performance metric that gives the length of time it takes for a qubit to relax from its excited state to its ground state — as a function of operating frequency and time.
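The T1 metric corresponds to a simple exponential decay of the excited-state population. A minimal sketch of that relationship (the 20 µs value below is illustrative only, not taken from the paper):

```python
import math

def excited_population(t_us, t1_us):
    """Probability that a qubit prepared in its excited state has not
    yet relaxed after t_us microseconds, given energy-relaxation time T1."""
    return math.exp(-t_us / t1_us)

# Example: with an illustrative T1 of 20 us, the excited-state
# population after 20 us has decayed to 1/e of its initial value.
p = excited_population(20.0, 20.0)
```

Measuring this decay curve at many operating frequencies, repeatedly over time, is what produces the T1 map shown in the figure below.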

In measuring T1, we found that some qubit operating frequencies are significantly worse than others, forming energy-relaxation hot-spots (see figure below). Our research suggests that these hot spots are due to material defects, which are themselves quantum systems that can extract energy from qubits when their frequencies overlap (i.e. are “resonant”). Surprisingly, we found that the energy-relaxation hot spots are not static, but “move” on timescales ranging from minutes to hours. From these observations, we concluded that the dynamics of defects’ frequencies into and out of resonance with qubits drives the most significant performance fluctuations.

Left: A quantum processor similar to the one that was used to investigate qubit performance fluctuations. One qubit is highlighted in blue. Right: One qubit’s energy-relaxation time “T1” plotted as a function of its operating frequency and time. We see energy-relaxation hotspots, which our data suggest are due to material defects (black arrowheads). The motion of these hotspots into and out of resonance with the qubit is responsible for the most significant energy-relaxation fluctuations. Note that these data were taken over a frequency band with an above-average density of defects.

These defects — which are typically referred to as two-level-systems (TLS) — are commonly believed to exist at the material interfaces of superconducting circuits. However, even after decades of research, their microscopic origin still puzzles researchers. In addition to clarifying the origin of qubit performance fluctuations, our data shed light on the physics governing defect dynamics, which is an important piece of this puzzle. Interestingly, from thermodynamics arguments we would not expect the defects that we see to exhibit any dynamics at all. Their energies are about one order of magnitude higher than the thermal energy available in our quantum processor, and so they should be “frozen out.” The fact that they are not frozen out suggests their dynamics may be driven by interactions with other defects that have much lower energies and can thus be thermally activated.

The fact that qubits can be used to investigate individual material defects – which are believed to have atomic dimensions, millions of times smaller than our qubits – demonstrates that they are powerful metrological tools. While it’s clear that defect research could help address outstanding problems in materials physics, it’s perhaps surprising that it has direct implications on improving the performance of today’s quantum processors. In fact, defect metrology already informs our processor design and fabrication, and even the mathematical algorithms that we use to avoid defects during quantum processor runtime. We hope this research motivates further work into understanding material defects in superconducting circuits.


Teaching the Google Assistant to be Multilingual

Posted by Johan Schalkwyk, VP and Ignacio Lopez Moreno, Engineer, Google Speech

Multilingual households are becoming increasingly common, with several sources [1][2][3] indicating that multilingual speakers already outnumber monolingual counterparts, and that this number will continue to grow. With this large and increasing population of multilingual users, it is more important than ever that Google develop products that can support multiple languages simultaneously to better serve our users.

Today, we’re launching multilingual support for the Google Assistant, which enables users to jump between two different languages across queries, without having to go back to their language settings. Once users select two of the supported languages (English, Spanish, French, German, Italian and Japanese), they can speak to the Assistant in either language and the Assistant will respond in kind. Previously, users had to choose a single language setting for the Assistant and change that setting each time they wanted to use another language; now, it’s a simple, hands-free experience for multilingual households.

The Google Assistant is now able to identify the language, interpret the query and provide a response using the right language without the user having to touch the Assistant settings.

Getting this to work, however, was not a simple feat. In fact, this was a multi-year effort that involved solving a lot of challenging problems. In the end, we broke the problem down into three discrete parts: Identifying Multiple Languages, Understanding Multiple Languages and Optimizing Multilingual Recognition for Google Assistant users.

Identifying Multiple Languages
People have the ability to recognize when someone is speaking another language, even if they do not speak the language themselves, just by paying attention to the acoustics of the speech (intonation, phonetic registry, etc). However, defining a computational framework for automatic spoken language recognition is challenging, even with the help of full automatic speech recognition systems1. In 2013, Google started working on spoken language identification (LangID) technology using deep neural networks [4][5]. Today, our state-of-the-art LangID models can distinguish between over 2000 alternative language pairs using recurrent neural networks, a family of neural networks that is particularly successful for sequence modeling problems, such as those in speech recognition, voice detection, speaker recognition and others. One of the challenges we ran into was working with larger sets of audio — getting models that can automatically understand multiple languages at scale, and hitting a quality standard that allowed those models to work properly.

Understanding Multiple Languages
To understand more than one language at once, multiple processes need to be run in parallel, each producing incremental results, allowing the Assistant not only to identify the language in which the query is spoken but also to parse the query to create an actionable command. For example, even for a monolingual environment, if a user asks to “set an alarm for 6pm”, the Google Assistant must understand that “set an alarm” implies opening the clock app, fulfilling the explicit parameter of “6pm” and additionally make the inference that the alarm should be set for today. To make this work for any given pair of supported languages is a challenge, as the Assistant executes the same work it does for the monolingual case, but now must additionally enable LangID, and not just one but two monolingual speech recognition systems simultaneously (we’ll explain more about the current two language limitation later in this post).
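The “set an alarm for 6pm” example above involves recognizing an intent, filling its explicit parameter, and inferring the missing date. A toy sketch of that parsing step (the intent names and grammar here are invented for illustration; the Assistant’s real parser is far more general):

```python
import datetime

def parse_query(text):
    """Toy intent parser for the alarm example. Recognizes one intent,
    extracts the explicit time parameter, and infers that an alarm
    with no date given is meant for today."""
    text = text.lower()
    prefix = "set an alarm for "
    if text.startswith(prefix):
        time_str = text[len(prefix):]          # e.g. "6pm"
        hour = int(time_str.rstrip("apm"))     # strip the am/pm suffix
        if time_str.endswith("pm") and hour != 12:
            hour += 12
        return {"intent": "clock.set_alarm",
                "time": datetime.time(hour, 0),
                "date": "today"}               # inferred, not explicit
    return {"intent": "unknown"}

cmd = parse_query("set an alarm for 6pm")
```

In the multilingual case, one such parse must be produced per candidate language, in parallel, while LangID is still deciding which language is actually being spoken.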

Importantly, the Google Assistant and other services that are referenced in the user’s query asynchronously generate real-time incremental results that need to be evaluated in a matter of milliseconds. This is accomplished with the help of an additional algorithm that ranks the transcription hypotheses provided by each of the two speech recognition systems using the probabilities of the candidate languages produced by LangID, our confidence in the transcription, and the user’s preferences (such as favorite artists, for example).

Schematic of our multilingual speech recognition system used by the Google Assistant versus the standard monolingual speech recognition system. A ranking algorithm is used to select the best recognition hypotheses from the two monolingual speech recognizers using relevant information about the user and the incremental LangID results.
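The ranking step described above can be sketched as a scoring rule over the competing hypotheses. The weighting below is invented for illustration; the production system combines these signals with learned weights rather than a fixed formula:

```python
def rank_hypotheses(hypotheses, langid_probs, user_prefs):
    """Rank transcription hypotheses from parallel monolingual recognizers.

    hypotheses:   list of (language, transcript, recognizer_confidence)
    langid_probs: dict mapping language -> incremental LangID probability
    user_prefs:   strings (e.g. favorite artists) that boost a matching
                  hypothesis

    Illustrative scoring only: multiply LangID probability by recognizer
    confidence, with a small boost for matching user preferences.
    """
    def score(hyp):
        lang, text, confidence = hyp
        s = langid_probs.get(lang, 0.0) * confidence
        if any(pref.lower() in text.lower() for pref in user_prefs):
            s *= 1.2  # hypothetical preference boost
        return s
    return sorted(hypotheses, key=score, reverse=True)

best = rank_hypotheses(
    [("en", "play bad romance", 0.80), ("es", "pon bad romance", 0.75)],
    {"en": 0.6, "es": 0.4},
    {"Bad Romance"},
)[0]
```

Because the inputs arrive incrementally, this ranking is re-run as new partial transcripts and updated LangID probabilities stream in.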

When the user stops speaking, the model has not only determined what language was being spoken, but also what was said. Of course, this process requires a sophisticated architecture that comes with an increased processing cost and the possibility of introducing unnecessary latency.

Optimizing Multilingual Recognition
To minimize these undesirable effects, the faster the system can make a decision about which language is being spoken, the better. If the system becomes certain of the language being spoken before the user finishes a query, then it will stop running the user’s speech through the losing recognizer and discard the losing hypothesis, thus lowering the processing cost and reducing any potential latency. With this in mind, we saw several ways of optimizing the system.

One observation we relied on was that people normally use the same language throughout their query (which is also the language users generally want to hear back from the Assistant), with the exception of asking about entities with names in different languages. This means that, in most cases, focusing on the first part of the query allows the Assistant to make a preliminary guess of the language being spoken, even in sentences containing entities in a different language. With this early identification, the task is simplified by switching to a single monolingual speech recognizer, as we do for monolingual queries. Making a quick decision about how and when to commit to a single language, however, requires a final technological twist: specifically, we use a random forest technique that combines multiple contextual signals, such as the type of device being used, the number of speech hypotheses found, how often we receive similar hypotheses, the uncertainty of the individual speech recognizers, and how frequently each language is used.
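The commit decision can be pictured as a vote over simple per-signal rules. The three hand-written “trees” and thresholds below are a stand-in for illustration; the real random forest is trained on logged data over the contextual signals named above:

```python
# Hand-rolled stand-in for the random forest that decides when to
# commit to a single language. Each rule votes True ("commit now").

def tree_uncertainty(sig):
    # Low recognizer uncertainty argues for committing early.
    return sig["uncertainty"] < 0.2

def tree_hypotheses(sig):
    # Few competing speech hypotheses argues for committing early.
    return sig["num_hypotheses"] <= 2

def tree_usage(sig):
    # A language the user speaks often argues for committing early.
    return sig["lang_usage_freq"] > 0.7

def should_commit(signals):
    """Majority vote of the rule 'trees': True means stop the losing
    recognizer, discard its hypothesis, and commit to the leader."""
    votes = [t(signals) for t in
             (tree_uncertainty, tree_hypotheses, tree_usage)]
    return sum(votes) >= 2

decision = should_commit(
    {"uncertainty": 0.06, "num_hypotheses": 1, "lang_usage_freq": 0.85}
)
```

Committing early is what recovers the processing cost and latency of the monolingual path for the common case.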

An additional way we simplified and improved the quality of the system was to limit the list of candidate languages users can select. Users can choose two languages out of the six that our Home devices currently support, which will allow us to support the majority of our multilingual speakers. As we continue to improve our technology, however, we hope to tackle trilingual support next, knowing that this will further enhance the experience of our growing user base.

Bilingual to Trilingual
From the beginning, our goal has been to make the Assistant naturally conversational for all users. Multilingual support has been a highly-requested feature, and it’s something our team set its sights on years ago. But there aren’t just a lot of bilingual speakers around the globe today; we also want to make life a little easier for trilingual users, or families that live in homes where more than two languages are spoken.

With today’s update, we’re on the right track, and it was made possible by our advanced machine learning, our speech and language recognition technologies, and our team’s commitment to refine our LangID model. We’re now working to teach the Google Assistant how to process more than two languages simultaneously, and are working to add more supported languages in the future — stay tuned!

1 It is typically acknowledged that spoken language recognition is remarkably more challenging than text-based language identification, where relatively simple techniques based on dictionaries can do a good job. The time/frequency patterns of spoken words are difficult to compare, spoken words can be more difficult to delimit as they can be spoken without pause and at different paces, and microphones may record background noise in addition to speech.


Debunking the Fat Myth

The field of nutrition is rife with bad advice. Some of it can be downright life-threatening.

The cult around food attracts peculiar scientific prophets (cc photo: Joshua Rappeneker)

By far the most-read article of 2017 in the British medical journal “The Lancet” was the publication of the results of the PURE study. This most ambitious nutrition study to date examined a dispute that has been running for a decade: which is more problematic for health, fat or carbohydrates?

The participants were 135,335 healthy people aged between 35 and 70, recruited in 18 different countries. Their dietary habits were meticulously recorded over an average period of seven years. The study’s sponsors had no influence on either the design or the analysis.

The results of the study were unambiguous: the group with the highest proportion of carbohydrates had a 28 percent higher overall mortality risk compared to those with low carbohydrate consumption. The trend for fat consumption ran exactly the other way: high fat consumption extended life. This applied to monounsaturated as well as polyunsaturated fatty acids, and equally to the much-maligned saturated fatty acids, which mostly come from animal products. A high proportion of saturated fat even proved especially protective against strokes.

The results of the PURE study represent another heavy defeat for the international nutrition societies, whose guidelines still limit the share of fat to less than 30 percent, and the share of saturated fats to less than 10 percent.

Numerous smaller studies had already suggested that the “light wave” policy of demonizing fat and raising the sugar content of foods, which has shaped supermarket shelves since the 1980s, has had fatal consequences. Starting in the USA, denatured, highly processed foods conquered the world market. They could be produced extremely cheaply and offered the industry high profit margins. And with the help of nutrition science, they could even feign a health benefit.

Beholden to industry
Hardly any field of science has sold itself to its funders as shamelessly as the nutrition experts. For decades their elite has seen itself as a henchman of the food industry. Some of their guidelines on “healthy eating” are downright dangerous to the public. The catastrophic consequences of their advice are most visible in the USA, where the collusion between science, industry and bought politics is best organized: the land of the fat.

Within 30 years, an entire country has ballooned, from internationally fairly normal values in the mid-1980s. Every fifth high-school kid now fits only into XXL clothes and waddles like a duck from the school bus to the front door. And anyone who ends up in the fast lane weight-wise rarely comes back down. The number of Americans with extreme obesity (a BMI, or body mass index, above 40) has quadrupled. More and more US hospitals are considering purchasing veterinary examination equipment, or simply drive their problem patients straight to the zoo. There, the 220-pound man is hoisted by crane into the CT scanner as soon as the sick rhinoceros has finished its examination.

Carbohydrate fattening, as scientifically recommended (photo: pixabay.com)

This trend was set in motion by a worldwide campaign against fat. Considered especially diabolical were animal fat and the “saturated fatty acids” predominant in butter and lard. Nutrition guides recommended switching broadly to carbohydrate-rich foods and using “heart-healthy” margarine. Had the “experts” done a little more research, they would have come across the old recipes of farmers, who had long known that their pigs and oxen gain weight fastest on a carbohydrate diet. And that is exactly what then happened to people.

Millions of deaths from artificial fat
The “evil” animal fat was replaced by cheap vegetable oils from corn, rapeseed and sunflower. The trouble was that these oils are liquid and therefore hard to process. But here too science came to the rescue: if the oil is heated above 200 degrees Celsius and then “bombarded” with hydrogen under high pressure, the fatty acids become saturated and thus hardened. The convenient part is that this process can be stopped at any time, delivering whatever consistency the customer needs. Unfortunately, as a kind of industrial accident of the hardening process, artificial fats are created: so-called trans fats. The body breaks them down poorly, and they cause inflammation in the blood vessels, with all its dramatic consequences. Well into the 1990s, margarine consisted of up to one third trans fats. Only in recent years has the trans fat content been strictly limited in the EU. But the damage done cannot be undone: “Probably,” says Walter Willett of Harvard University in Boston, “millions of people worldwide have died prematurely because our food contains too many trans fats.”

Yet many other, similarly problematic, pieces of nutritional advice remain in force. For instance the lucrative scaremongering about “dangerous cholesterol,” which generates billions in sales of corresponding cholesterol-lowering drugs. Or the claim that belly fat is especially dangerous. Or the advice that white meat (e.g. poultry) is healthier than red meat (e.g. beef).

More and more often it turns out that the studies underlying these claims were riddled with errors. When they are repeated with well-designed studies, quite different results emerge. One example is the large-scale European study on the link between diet and cancer, coordinated by epidemiologist Sabine Rohrmann and her team at the University of Zurich. The results showed no difference between people who consumed more poultry and those who consumed more beef. People with a persistently high consumption of processed meat products, however, did fare worse: their risk of both cancer and heart disease was significantly higher. The researchers therefore advise eating no more than 20 grams of processed meat products (e.g. hot dogs, ready-made meat sauces, chicken nuggets, spreadable sausage, etc.) per day on average.

So how should we eat; what is really healthy? From studies of centenarians we know that they have one thing above all in common: their blood sugar is low and they have astonishingly low insulin levels. It therefore makes sense to choose and optimize one’s diet in that direction.

More on this in the detailed nutrition chapter of my book “Der Methusalem-Code.”


Introducing a New Framework for Flexible and Reproducible Reinforcement Learning Research

Posted by Pablo Samuel Castro, Research Software Developer and Marc G. Bellemare, Research Scientist, Google Brain Team

Reinforcement learning (RL) research has seen a number of significant advances over the past few years. These advances have allowed agents to play games at a super-human level — notable examples include DeepMind’s DQN on Atari games along with AlphaGo and AlphaGo Zero, as well as OpenAI Five. Specifically, the introduction of replay memories in DQN enabled leveraging previous agent experience, large-scale distributed training enabled distributing the learning process across multiple workers, and distributional methods allowed agents to model full distributions, rather than simply their expected values, to learn a more complete picture of their world. This type of progress is important, as the algorithms yielding these advances are additionally applicable to other domains, such as robotics (see our recent work on robotic manipulation and teaching robots to visually self-adapt).
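The replay-memory idea mentioned above is simple at its core: store past transitions and sample them uniformly at random to decorrelate updates. A minimal sketch (the capacity and batch size are illustrative, not the framework’s defaults):

```python
import random
from collections import deque

class ReplayMemory:
    """Minimal sketch of a DQN-style replay memory: a bounded buffer
    of (state, action, reward, next_state, done) transitions from
    which training batches are sampled uniformly at random."""

    def __init__(self, capacity=10000):
        # Oldest transitions are evicted automatically once full.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling breaks the temporal correlation between
        # consecutive transitions that destabilizes online updates.
        return random.sample(list(self.buffer), batch_size)

memory = ReplayMemory(capacity=100)
for step in range(5):
    memory.add(step, 0, 1.0, step + 1, False)
batch = memory.sample(2)
```

Production implementations add refinements (frame stacking, prioritized sampling), but this captures the mechanism that lets an agent reuse prior experience.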

Quite often, developing these kinds of advances requires quickly iterating over a design — often with no clear direction — and disrupting the structure of established methods. However, most existing RL frameworks do not provide the combination of flexibility and stability that enables researchers to iterate on RL methods effectively, and thus explore new research directions that may not have immediately obvious benefits. Further, reproducing the results from existing frameworks is often too time consuming, which can lead to scientific reproducibility issues down the line.

Today we’re introducing a new TensorFlow-based framework that aims to provide flexibility, stability, and reproducibility for new and experienced RL researchers alike. Inspired by one of the main components in reward-motivated behaviour in the brain and reflecting the strong historical connection between neuroscience and reinforcement learning research, this platform aims to enable the kind of speculative research that can drive radical discoveries. This release also includes a set of colabs that clarify how to use our framework.

Ease of Use
Clarity and simplicity are two key considerations in the design of this framework. The code we provide is compact (about 15 Python files) and is well-documented. This is achieved by focusing on the Arcade Learning Environment (a mature, well-understood benchmark), and four value-based agents: DQN, C51, a carefully curated simplified variant of the Rainbow agent, and the Implicit Quantile Network agent, which was presented only last month at the International Conference on Machine Learning (ICML). We hope this simplicity makes it easy for researchers to understand the inner workings of the agent and to quickly try out new ideas.

We are particularly sensitive to the importance of reproducibility in reinforcement learning research. To this end, we provide our code with full test coverage; these tests also serve as an additional form of documentation. Furthermore, our experimental framework follows the recommendations given by Machado et al. (2018) on standardizing empirical evaluation with the Arcade Learning Environment.

It is important for new researchers to be able to quickly benchmark their ideas against established methods. As such, we are providing the full training data of the four provided agents, across the 60 games supported by the Arcade Learning Environment, available as Python pickle files (for agents trained with our framework) and as JSON data files (for comparison with agents trained in other frameworks); we additionally provide a website where you can quickly visualize the training runs for all provided agents on all 60 games. Below we show the training runs for our 4 agents on Seaquest, one of the Atari 2600 games supported by the Arcade Learning Environment.

The training runs for our 4 agents on Seaquest. The x-axis represents iterations, where each iteration is 1 million game frames (4.5 hours of real-time play); the y-axis is the average score obtained per play. The shaded areas show confidence intervals from 5 independent runs.

We are also providing the trained deep networks from these agents, the raw statistics logs, as well as the TensorFlow event files for plotting with TensorBoard. These can all be found in the downloads section of our site.

Our hope is that our framework’s flexibility and ease-of-use will empower researchers to try out new ideas, both incremental and radical. We are already actively using it for our research and finding it is giving us the flexibility to iterate quickly over many ideas. We’re excited to see what the larger community can make of it. Check it out at our github repo, play with it, and let us know what you think!

This project was only possible thanks to several collaborations at Google. The core team includes Marc G. Bellemare, Pablo Samuel Castro, Carles Gelada, Subhodeep Moitra and Saurabh Kumar. We also extend a special thanks to Sergio Guadamarra, Ofir Nachum, Yifan Wu, Clare Lyle, Liam Fedus, Kelvin Xu, Emilio Parisoto, Hado van Hasselt, Georg Ostrovski and Will Dabney, and the many people at Google who helped us test it out.


Introduction to hard anodizing

The main purpose of hard anodizing is to form a thick, dense oxide layer with high wear resistance, at thicknesses above 25 µm (1 mil). A dense oxide layer is an oxide layer with narrow pores and very thick cell walls.

The figures show the differences in structure of type II anodizing compared to type III anodizing. The anodized layer is viewed from the top, looking down into the porous hexagonal structure.

The structure of the porous aluminium oxide layer is highly ordered, as explained in an earlier post with the great slide from a hard coat presentation by Mr. Leonid Lerner from Sanford Process Corp. at the International Hard Anodizing Association symposium in Las Vegas.

The slide shows the two different directions of external stress in the anodized oxide layer, depending on whether we are testing it or using it in normal applications.

According to MIL-A-8625F, type III coatings shall result from treating aluminium and aluminium alloys electrolytically in a sulfuric acid based electrolyte to produce a uniform, hard anodic coating, often called Hard Coat or Hard Anodized in the US.

This can be done with a low electrolyte temperature and a low electrolyte concentration, in order to slow down chemical dissolution of the oxide layer. Production of very thick coatings will usually involve very high voltages and/or high current densities, which lead to high local temperatures; agitation of the electrolyte is therefore most important.
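To get a feel for the process parameters involved, anodizers often use the empirical “Rule of 720,” which relates coating thickness, current density and anodizing time. A sketch of that rule of thumb (the example values are illustrative; alloy, temperature and dissolution effects shift the real numbers):

```python
def anodize_time_minutes(thickness_mils, current_density_asf):
    """Estimate anodizing time from the empirical 'Rule of 720':
    minutes = 720 * thickness (mils) / current density (A/ft^2).
    A rule of thumb only, not a substitute for process control."""
    return 720.0 * thickness_mils / current_density_asf

# Example: a 2 mil (~50 um) hard coat at 24 A/ft^2 takes roughly
# an hour of anodizing time.
t = anodize_time_minutes(2.0, 24.0)
```

The rule makes the trade-off in the paragraph above concrete: pushing current density up shortens the run, but at the cost of more local heating, which is why electrolyte agitation matters so much.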

According to MIL-A-8625F, hard-anodized coatings are characterized by their layer thickness and the coating weight of the formed layer. These coatings are designated Type III coatings. They are usually used in the engineering industry for components such as pistons, cylinders and hydraulic gear, where severe abrasive wear occurs.

Apart from the wear resistance of the oxide layer, the hard-anodized oxide layer has other properties. Properties such as low friction and non-stick are very important. These hard coatings are usually unsealed to maintain a high wear resistance, but can be impregnated with different materials such as waxes and silicone.

If sealed in hot water, the wear resistance will decrease by 20–50%, depending on the sealing process used.

If corrosion resistance is the most important property for the surface, sealing will enhance it. The sealing will normally be done in hot water or dichromate, which increases the corrosion resistance remarkably.

If you find this article useful and you would like to know more please contact me
 [email protected] 

