Next Round Of Codes

Last week’s mega blog on the BEC actually missed a few nuggets, so I wanted to clean that up here.  There were some additional pieces that I found newsworthy.  As always, Dr. Tom Culp provided an update of extreme value, as did Urmilla Jokhu-Sowell.   First among the many nuggets from Tom was a discussion of where the energy codes will go next… he listed four bullet points that I found relevant:

– More high performance thermal breaks

– More 4th-surface low-e, or triple glazing

– More warm-edge spacers

– Lower-SHGC triple silvers in the South

For some of us, the thought of these things is exciting, and for some of us it is nauseating.  For those sick to their stomach, my advice is that it’s coming, so you may want to start prepping now while you still have a bit of time.  One thought I had: I wonder if moves like this will grow the vacuum IG side of the commercial industry?  Regardless, these are items to have on the radar.

Urmilla’s presentation broke down how much is going on on the tech side of NGA/GANA.  With the merged group some things will certainly change, but what will not change is the desire to make sure the items that affect our industry the most, the serious ones, are addressed.  I am excited to see how Urmilla and the technical side evolve and advance with the new setup.


–  I did run into Courtney Little of Ace Glass at BEC but found out afterward that he was just elected President of the American Subcontractors Association (ASA).  Courtney will be a great force there, and he has always been a tremendous source of insight for our industry.  Congrats, Courtney, and I hope to still see you at glass events even with your new responsibilities. 

–  Saw an interesting article this week that was promoting the “Tesla of Housing.”  Basically the piece compared a contractor to the groundbreaking Tesla vehicle: one specifically focused on advancing energy efficiency in the housing market.  The approach noted is basically “passivhaus,” which is not new but still very good and important.  Resistance in the US has always been pretty strong, since we love our “McMansions,” so we’ll see if this developer can break that trend.
–  I do love the show Flip or Flop on HGTV, though I do believe the pricing they assign to things is usually woefully low, especially on shower enclosures.  (Please, anyone who’s worked on that show with glass, weigh in.)  I find it very interesting to get into the minds of the players.  Another show in that genre is “Fixer Upper,” and from time to time Dustin Anderson of Anderson Glass plays a role.  In the new episode that aired last Tuesday, Dustin had to fabricate a huge glass wall and install it on the 3rd floor of an apartment building.  The wall was weight-bearing as well, and Dustin and his team had to move materials up the old-fashioned way: through the stairwells!  Overall it was interesting to watch the players view what Dustin and his team did with amazement.  Glass and glazing is so cool, and so many don’t realize it.  Kudos to Dustin for showing off what we do to the masses.

–  Last this week… my favorite show…The Americans returns for its final season on the 28th.  Lots of loose ends to tie up and I simply can’t wait!


Sometimes cool things happen from mistakes online and with texts.
Tracking of students by their ID cards- not surprised in the least
Wonderful.  A mystery disease is on the horizon. 

As mentioned above the final season of The Americans comes on March 28th.  Here’s the preview…


Mezco Toyz One:12 Collective 1/12th scale Selina Kyle Catwoman 16cm tall action figure

Mezco welcomes Gotham’s most notorious cat burglar, Catwoman, into the One:12 collective.

The One:12 Collective Catwoman figure flaunts three exquisitely detailed head portraits: a flirty smile, a hissing snarl, and an unmasked Selina Kyle portrait. In addition to Catwoman’s classic form-fitting suit, she comes complete with a whip, waist belt with an opening tool kit, removable backpack, and goggles that fit both masked heads.

Morally ambiguous, stealthy, and agile, Selina Kyle AKA Catwoman, started out as a cat burglar to survive and protect those closest to her. Utilizing a cat mask, this furtive femme fatale is known to steal from Gotham City’s rich and corrupt. Do not cross this cat or bad luck is sure to follow.

Mezco Toyz One:12 Collective 1/12th scale Catwoman female action figure features: Three (3) head portraits - Smiling head, Snarling head, Unmasked head | Approximately 16cm tall One:12 Collective body with over 30 points of articulation | Hand painted authentic detailing | Eight (8) interchangeable hands: pair of fists, whip holding hand (R), “come here” hand, pair of posing hands

Scroll down to see all the pictures.
Click on them for bigger and better views.

Costume: Tailored stretch catsuit, Waist belt with tool kit, Mid-calf work boots

Accessories: pair of goggles (Fit both masked heads), backpack (removable), whip, waist belt with cat burglar tool kit (non-removable)

Each One:12 Collective Catwoman figure is packaged in a collector-friendly box designed with collectors in mind; there are no twist ties, for easy in-and-out of package display.


Using Deep Learning to Facilitate Scientific Image Analysis

Posted by Samuel Yang, Research Scientist, Google Accelerated Science Team

Many scientific imaging applications, especially microscopy, can produce terabytes of data per day. These applications can benefit from recent advances in computer vision and deep learning. In our work with biologists on robotic microscopy applications (e.g., to distinguish cellular phenotypes) we’ve learned that assembling high quality image datasets that separate signal from noise is a difficult but important task. We’ve also learned that there are many scientists who may not write code, but who are still excited to utilize deep learning in their image analysis work. A particular challenge we can help address involves dealing with out-of-focus images. Even with the autofocus systems on state-of-the-art microscopes, poor configuration or hardware incompatibility may result in image quality issues. Having an automated way to rate focus quality can enable the detection, troubleshooting and removal of such images.

Deep Learning to the Rescue
In “Assessing Microscope Image Focus Quality with Deep Learning”, we trained a deep neural network to rate the focus quality of microscopy images with higher accuracy than previous methods. We also integrated the pre-trained TensorFlow model with plugins in Fiji (ImageJ) and CellProfiler, two leading open source scientific image analysis tools that can be used with either a graphical user interface or invoked via scripts.

A pre-trained TensorFlow model rates focus quality for a montage of microscope image patches of cells in Fiji (ImageJ). Hue and lightness of the borders denote predicted focus quality and prediction uncertainty, respectively.

Our publication and source code (TensorFlow, Fiji, CellProfiler) illustrate the basics of a machine learning project workflow: assembling a training dataset (we synthetically defocused 384 in-focus images of cells, avoiding the need for a hand-labeled dataset), training a model using data augmentation, evaluating generalization (in our case, on unseen cell types acquired by an additional microscope) and deploying the pre-trained model. Previous tools for identifying image focus quality often require a user to manually review images for each dataset to determine a threshold between in-focus and out-of-focus images; our pre-trained model requires no user-set parameters and can rate focus quality more accurately as well. To help improve interpretability, our model evaluates focus quality on 84×84 pixel patches which can be visualized with colored patch borders.
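To make the patch-based design more concrete, here is a minimal sketch of tiling an image into 84×84 patches and scoring each one. This is not the authors' code; the `model` callable is just a stand-in for the pre-trained focus-quality network.

```python
import numpy as np

PATCH = 84  # focus quality is rated on 84x84 pixel patches

def score_patches(image, model):
    """Tile a 2-D grayscale image into non-overlapping 84x84 patches and score each.

    `model` is a placeholder: any callable mapping a batch of patches
    with shape (N, 84, 84) to N per-patch focus-quality scores.
    """
    h, w = image.shape
    patches, coords = [], []
    for y in range(0, h - PATCH + 1, PATCH):
        for x in range(0, w - PATCH + 1, PATCH):
            patches.append(image[y:y + PATCH, x:x + PATCH])
            coords.append((y, x))
    scores = model(np.stack(patches))  # one score per patch
    return list(zip(coords, scores))
```

The per-patch scores could then be drawn as colored borders on a montage, as in the Fiji plugin described above.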

What about Images without Objects?
An interesting challenge we overcame was that there are often “blank” image patches with no objects, a scenario where no notion of focus quality exists. Instead of explicitly labeling these “blank” patches and teaching our model to recognize them as a separate category, we configured our model to predict a probability distribution across defocus levels, allowing it to learn to express uncertainty (dim borders in the figure) for these empty patches (e.g. predict equal probability in/out-of-focus).
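As a rough illustration of that idea (my own sketch, not the paper's formulation: the discrete defocus levels and the entropy-based uncertainty measure are assumptions), a prediction and an uncertainty can be read out of such a distribution like this:

```python
import numpy as np

def summarize_prediction(probs):
    """Summarize a predicted probability distribution over defocus levels.

    probs: 1-D array over discrete defocus levels (assumed to sum to 1).
    Returns the expected defocus level and an entropy-based uncertainty in
    [0, 1]; a near-uniform prediction (e.g. on a blank patch) scores near 1.
    """
    probs = np.asarray(probs, dtype=float)
    levels = np.arange(len(probs))
    expected_defocus = float(np.dot(levels, probs))
    entropy = float(-np.sum(probs * np.log(probs + 1e-12)))
    return expected_defocus, entropy / np.log(len(probs))
```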

What’s Next?
Deep learning-based approaches for scientific image analysis will improve accuracy, reduce manual parameter tuning and may reveal new insights. Clearly, the sharing and availability of datasets and models, and implementation into tools that are proven to be useful within respective communities, will be important for widespread adoption.

We thank Claire McQuin, Allen Goodman, Anne Carpenter of the Broad Institute and Kevin Eliceiri of the University of Wisconsin at Madison for assistance with CellProfiler and Fiji integration, respectively.


Hot Toys Thor: Ragnarok 1/6th scale Tom Hiddleston as Loki 12-inch Collectible Figure


“Your savior has arrived!”

Marvel Studios’ Thor: Ragnarok has been topping box offices around the world since it was released in theaters and has received wide acclaim from audiences and critics worldwide! Having been portrayed as a villain in the past, the God of Mischief, Loki, is returning to unite with his brother, the God of Thunder, Thor, to fight against the dangerously powerful Hela in order to save their home and the people of Asgard.

Fans have been anticipating the reveal of a new collectible figure of this particular character and today Hot Toys is extremely delighted to officially present the new 1/6th scale Loki collectible figure from the Marvel Studios blockbuster Thor: Ragnarok.

Sophisticatedly crafted based on the appearance of Tom Hiddleston as Loki in the movie, the collectible figure features a newly developed head sculpt with astonishing likeness, a brand new leather-like costume with a green colored cape, Loki’s iconic helmet, and an array of weapons and accessories, which includes two highly detailed daggers, the Tesseract, Surtur’s skull with a fired-up effect, and a specially designed figure stand.


Scroll down to see the rest of the pictures.
Click on them for bigger and better views.

Hot Toys MMS472 1/6th scale Loki Collectible Figure specially features: Newly developed head sculpt with authentic and detailed likeness of Tom Hiddleston as Loki in Thor: Ragnarok | Movie-accurate facial expression with detailed skin texture | Black colored hair sculpture | Approximately 31cm tall body with over 30 points of articulation | Seven (7) interchangeable hands including: pair of fists, pair of open hands, pair of dagger-holding hands, Tesseract-holding right hand

Costume: green-colored cape, black leather-like vest, long sleeve leather-like black shirt with green and bronze-colored armor, pair of leather-like green and black colored pants with patterns, pair of boot covers, pair of black colored boots

Weapons: Two (2) daggers (can be placed into the sheath), Two (2) dagger sheaths

Accessories: Loki’s gold helmet (wearable to the head sculpt), Tesseract with base, translucent orange colored Surtur’s Skull, Specially-designed figure stand with character nameplate and the movie logo



Sideshow Collectibles 25-inch (63.5cm) tall The Joker Premium Format™ Figure Pre-order


“All I’ve ever wanted is to have a good time…and to annoy Batman whenever possible, of course.”

Sideshow is proud to present The Joker Premium Format™ Figure, making mayhem for Gotham wherever he goes.

The key to humor is a good punch line- The Joker measures 25” tall atop a pogo-fist base inspired by his own suit sleeve, fitted with a dilapidated carnival game head. A giant purple glove is breaking the Gotham courthouse Seal of Justice, while his patented toxic laughing gas leaks from the wreckage. A Batusi 8-track hides on the control panel of The Joker’s crazy contraption along with a carved heart with H+J as a symbol of their mad love.

Bouncing into battle, the Clown Prince of Crime comes equipped with a Joker Fish tommy gun to put the fin-ishing touches on his elaborate scheme. The Joker has a maniacal, scarred portrait with his iconic face makeup and coiffed green hair, and a sculpted purple two-piece suit, green vest, and orange shirt. His suit is outfitted with intricate stitching details as well as a grinning Batgirl plush pinned to his lapel. Always playing the heel to Batman’s heroic antics, The Joker has a custom pair of shoes with ‘Ha!’ treads and a “Your Face Here” on the soles.


Scroll down to see all the pictures.
Click on them for bigger and better views.

The Exclusive edition of The Joker Premium Format™ Figure includes an alternate classic portrait of the Clown Prince of Crime, as well as a swap-out right hand holding a special cake for Batman. These accessories can be swapped out independently of one another, giving you additional display options.



Using Evolutionary AutoML to Discover Neural Network Architectures

Posted by Esteban Real, Senior Software Engineer, Google Brain Team

The brain has evolved over a long time, from very simple worm brains 500 million years ago to a diversity of modern structures today. The human brain, for example, can accomplish a wide variety of activities, many of them effortlessly — telling whether a visual scene contains animals or buildings feels trivial to us, for example. To perform activities like these, artificial neural networks require careful design by experts over years of difficult research, and typically address one specific task, such as to find what’s in a photograph, to call a genetic variant, or to help diagnose a disease. Ideally, one would want to have an automated method to generate the right architecture for any given task.

One approach to generate these architectures is through the use of evolutionary algorithms. Traditional research into neuro-evolution of topologies (e.g. Stanley and Miikkulainen 2002) has laid the foundations that allow us to apply these algorithms at scale today, and many groups are working on the subject, including OpenAI, Uber Labs, Sentient Labs and DeepMind. Of course, the Google Brain team has been thinking about AutoML too. In addition to learning-based approaches (e.g. reinforcement learning), we wondered if we could use our computational resources to programmatically evolve image classifiers at unprecedented scale. Can we achieve solutions with minimal expert participation? How good can today’s artificially-evolved neural networks be? We address these questions through two papers.

In “Large-Scale Evolution of Image Classifiers,” presented at ICML 2017, we set up an evolutionary process with simple building blocks and trivial initial conditions. The idea was to “sit back” and let evolution at scale do the work of constructing the architecture. Starting from very simple networks, the process found classifiers comparable to hand-designed models at the time. This was encouraging because many applications may require little user participation. For example, some users may need a better model but may not have the time to become machine learning experts. A natural question to consider next was whether a combination of hand-design and evolution could do better than either approach alone. Thus, in our more recent paper, “Regularized Evolution for Image Classifier Architecture Search” (2018), we participated in the process by providing sophisticated building blocks and good initial conditions (discussed below). Moreover, we scaled up computation using Google’s new TPUv2 chips. This combination of modern hardware, expert knowledge, and evolution worked together to produce state-of-the-art models on CIFAR-10 and ImageNet, two popular benchmarks for image classification.

A Simple Approach
The following is an example of an experiment from our first paper. In the figure below, each dot is a neural network trained on the CIFAR-10 dataset, which is commonly used to train image classifiers. Initially, the population consists of one thousand identical simple seed models (no hidden layers). Starting from simple seed models is important — if we had started from a high-quality model with initial conditions containing expert knowledge, it would have been easier to get a high-quality model in the end. Once seeded with the simple models, the process advances in steps. At each step, a pair of neural networks is chosen at random. The network with higher accuracy is selected as a parent and is copied and mutated to generate a child that is then added to the population, while the other neural network dies out. All other networks remain unchanged during the step. With the application of many such steps in succession, the population evolves.

Progress of an evolution experiment. Each dot represents an individual in the population. The four diagrams show examples of discovered architectures. These correspond to the best individual (rightmost; selected by validation accuracy) and three of its ancestors.
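A toy sketch of the selection loop just described might look as follows. The `train_and_eval` and `mutate` callables are placeholders for the expensive CIFAR-10 training and the paper's mutations, so this only illustrates the bookkeeping, not the real experiment.

```python
import copy
import random

def evolve(population, train_and_eval, mutate, steps):
    """Pairwise-tournament evolution sketch.

    population: list of dicts like {"arch": ..., "accuracy": ...}
    train_and_eval: placeholder callable that trains an architecture and returns accuracy
    mutate: placeholder callable that returns a randomly mutated copy of an architecture
    """
    for _ in range(steps):
        a, b = random.sample(population, 2)        # pick a pair at random
        parent, loser = (a, b) if a["accuracy"] >= b["accuracy"] else (b, a)
        child_arch = mutate(copy.deepcopy(parent["arch"]))
        child = {"arch": child_arch, "accuracy": train_and_eval(child_arch)}
        population.remove(loser)                    # the weaker network dies out
        population.append(child)                    # the child joins the population
    return max(population, key=lambda ind: ind["accuracy"])
```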

The mutations in our first paper are purposefully simple: remove a convolution at random, add a skip connection between arbitrary layers, or change the learning rate, to name a few. This way, the results show the potential of the evolutionary algorithm, as opposed to the quality of the search space. For example, if we had used a single mutation that transforms one of the seed networks into an Inception-ResNet classifier in one step, we would be incorrectly concluding that the algorithm found a good answer. Yet, in that case, all we would have done is hard-coded the final answer into a complex mutation, rigging the outcome. If instead we stick with simple mutations, this cannot happen and evolution is truly doing the job. In the experiment in the figure, simple mutations and the selection process cause the networks to improve over time and reach high test accuracies, even though the test set had never been seen during the process. In this paper, the networks can also inherit their parent’s weights. Thus, in addition to evolving the architecture, the population trains its networks while exploring the search space of initial conditions and learning-rate schedules. As a result, the process yields fully trained models with optimized hyperparameters. No expert input is needed after the experiment starts.

In all the above, even though we were minimizing the researcher’s participation by having simple initial architectures and intuitive mutations, a good amount of expert knowledge went into the building blocks those architectures were made of. These included important inventions such as convolutions, ReLUs and batch-normalization layers. We were evolving an architecture made up of these components. The term “architecture” is not accidental: this is analogous to constructing a house with high-quality bricks.

Combining Evolution and Hand Design
After our first paper, we wanted to reduce the search space to something more manageable by giving the algorithm fewer choices to explore. Using our architectural analogy, we removed all the possible ways of making large-scale errors, such as putting the wall above the roof, from the search space. Similarly with neural network architecture searches, by fixing the large-scale structure of the network, we can help the algorithm out. So how to do this? The inception-like modules introduced in Zoph et al. (2017) for the purpose of architecture search proved very powerful. Their idea is to have a deep stack of repeated modules called cells. The stack is fixed but the architecture of the individual modules can change.

The building blocks introduced in Zoph et al. (2017). The diagram on the left is the outer structure of the full neural network, which parses the input data from bottom to top through a stack of repeated cells. The diagram on the right is the inside structure of a cell. The goal is to find a cell that yields an accurate network.

In our second paper, “Regularized Evolution for Image Classifier Architecture Search” (2018), we presented the results of applying evolutionary algorithms to the search space described above. The mutations modify the cell by randomly reconnecting the inputs (the arrows on the right diagram in the figure) or randomly replacing the operations (for example, they can replace the “max 3×3” in the figure, a max-pool operation, with an arbitrary alternative). These mutations are still relatively simple, but the initial conditions are not: the population is now initialized with models that must conform to the outer stack of cells, which was designed by an expert. Even though the cells in these seed models are random, we are no longer starting from simple models, which makes it easier to get to high-quality models in the end. If the evolutionary algorithm is contributing meaningfully, the final networks should be significantly better than the networks we already know can be constructed within this search space. Our paper shows that evolution can indeed find state-of-the-art models that either match or outperform hand-designs.
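For illustration only, here is one way such a cell mutation could be sketched if a cell were encoded as a list of (operation, input, input) triples; the encoding and the operation names are assumptions, not the paper's exact search space.

```python
import random

# illustrative operation vocabulary, not the paper's exact set
OPS = ["max_pool_3x3", "avg_pool_3x3", "sep_conv_3x3", "sep_conv_5x5", "identity"]

def mutate_cell(cell, num_inputs=2):
    """Apply one random mutation to a cell.

    cell: list of (op, input_a, input_b) triples; node i may read from the
    cell inputs (indices 0..num_inputs-1) or from any earlier node's output.
    """
    cell = [list(node) for node in cell]
    i = random.randrange(len(cell))
    if random.random() < 0.5:
        # op mutation: replace the operation with an arbitrary alternative
        cell[i][0] = random.choice(OPS)
    else:
        # reconnection mutation: rewire one of the node's two inputs
        slot = random.choice([1, 2])
        cell[i][slot] = random.randrange(num_inputs + i)  # any earlier node or cell input
    return [tuple(node) for node in cell]
```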

A Controlled Comparison
Even though the mutation/selection evolutionary process is not complicated, maybe an even more straightforward approach (like random search) could have done the same. Other alternatives, though not simpler, also exist in the literature (like reinforcement learning). Because of this, the main purpose of our second paper was to provide a controlled comparison between techniques.

Comparison between evolution, reinforcement learning, and random search for the purposes of architecture search. These experiments were done on the CIFAR-10 dataset, under the same conditions as Zoph et al. (2017), where the search space was originally used with reinforcement learning.

The figure above compares evolution, reinforcement learning, and random search. On the left, each curve represents the progress of an experiment, showing that evolution is faster than reinforcement learning in the earlier stages of the search. This is significant because with less compute power available, the experiments may have to stop early. Moreover evolution is quite robust to changes in the dataset or search space. Overall, the goal of this controlled comparison is to provide the research community with the results of a computationally expensive experiment. In doing so, it is our hope to facilitate architecture searches for everyone by providing a case study of the relationship between the different search algorithms. Note, for example, that the figure above shows that the final models obtained with evolution can reach very high accuracy while using fewer floating-point operations.

One important feature of the evolutionary algorithm we used in our second paper is a form of regularization: instead of letting the worst neural networks die, we remove the oldest ones — regardless of how good they are. This improves robustness to changes in the task being optimized and tends to produce more accurate networks in the end. One reason for this may be that since we didn’t allow weight inheritance, all networks must train from scratch. Therefore, this form of regularization selects for networks that remain good when they are re-trained. In other words, because a model can be more accurate just by chance — noise in the training process means even identical architectures may get different accuracy values — only architectures that remain accurate through the generations will survive in the long run, leading to the selection of networks that retrain well. More details of this conjecture can be found in the paper.
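A minimal sketch of that aging rule, under the same assumptions as the earlier sketch (placeholder `train_and_eval` and `mutate` callables, illustrative tournament size): the population is kept as a queue and the oldest individual is removed regardless of its accuracy.

```python
import collections
import copy
import random

def regularized_evolution(seed_archs, train_and_eval, mutate, steps, sample_size=25):
    """Aging evolution sketch: remove the oldest individual, not the worst."""
    population = collections.deque(
        {"arch": a, "accuracy": train_and_eval(a)} for a in seed_archs
    )
    history = list(population)
    for _ in range(steps):
        # tournament: sample a few individuals, take the most accurate as parent
        # (sample_size is illustrative and must not exceed the population size)
        sample = random.sample(list(population), sample_size)
        parent = max(sample, key=lambda ind: ind["accuracy"])
        child_arch = mutate(copy.deepcopy(parent["arch"]))
        child = {"arch": child_arch, "accuracy": train_and_eval(child_arch)}
        population.append(child)
        population.popleft()          # the oldest dies, however accurate it was
        history.append(child)
    return max(history, key=lambda ind: ind["accuracy"])
```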

The state-of-the-art models we evolved are nicknamed AmoebaNets, and are one of the latest results from our AutoML efforts. All these experiments took a lot of computation: we used hundreds of GPUs/TPUs for days. Much like a single modern computer can outperform thousands of decades-old machines, we hope that in the future these experiments will become commonplace. Here we aimed to provide a glimpse into that future.

We would like to thank Alok Aggarwal, Yanping Huang, Andrew Selle, Sherry Moore, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Alex Kurakin, Quoc Le, Barret Zoph, Jon Shlens, Vijay Vasudevan, Vincent Vanhoucke, Megan Kacholia, Jeff Dean, and the rest of the Google Brain team for the collaborations that made this work possible.


Balanced Partitioning and Hierarchical Clustering at Scale

Posted by Hossein Bateni, Research Scientist and Kevin Aydin, Software Engineer, NYC Algorithms and Optimization Research Team

Solving large-scale optimization problems often starts with graph partitioning, which means partitioning the vertices of the graph into clusters to be processed on different machines. The need to make sure that clusters are of near-equal size gives rise to the balanced graph partitioning problem. In simple terms, we need to partition the vertices of a given graph into k almost equal clusters while minimizing the number of edges that are cut by the partition. This NP-hard problem is notoriously difficult in practice because the best approximation algorithms for small instances rely on semidefinite programming, which is impractical for larger instances.

This post presents the distributed algorithm we developed which is more applicable to large instances. We introduced this balanced graph-partitioning algorithm in our WSDM 2016 paper, and have applied this approach to several applications within Google. Our more recent NIPS 2017 paper provides more details of the algorithm via a theoretical and empirical study.

Balanced Partitioning via Linear Embedding
Our algorithm first embeds vertices of the graph onto a line, and then processes vertices in a distributed manner guided by the linear embedding order. We examine various ways to find the initial embedding, and apply four different techniques (such as local swaps and dynamic programming) to obtain the final partition. The best initial embedding is based on “affinity clustering”.
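As a naive, sequential illustration of the "embed, then split" idea (not the distributed pipeline, which additionally applies local swaps and dynamic programming; the function below is purely illustrative), one could cut a given vertex ordering into k contiguous, near-equal blocks and measure the resulting cut:

```python
def partition_from_order(order, edges, k):
    """Split a linear vertex ordering into k nearly equal contiguous blocks
    and report the fraction of edges cut by that partition.

    order: list of vertices in embedding order
    edges: list of (u, v) pairs
    """
    n = len(order)
    block = {}
    for rank, v in enumerate(order):
        block[v] = min(rank * k // n, k - 1)      # contiguous, near-equal blocks
    cut = sum(1 for u, v in edges if block[u] != block[v])
    return block, cut / max(len(edges), 1)
```

A better initial ordering (such as one derived from affinity clustering, described next) keeps tightly connected vertices adjacent, so fewer edges cross block boundaries.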

Affinity Hierarchical Clustering
Affinity clustering is an agglomerative hierarchical graph clustering based on Borůvka’s classic Maximum-cost Spanning Tree algorithm. As discussed above, this algorithm is a critical part of our balanced partitioning tool. The algorithm starts by placing each vertex in a cluster of its own: v0, v1, and so on. Then, in each iteration, the highest-cost edge out of each cluster is selected in order to induce larger merged clusters: A0, A1, A2, etc. in the first round and B0, B1, etc. in the second round and so on. The set of merges naturally produces a hierarchical clustering, and gives rise to a linear ordering of the leaf vertices (vertices with degree one). The image below demonstrates this, with the numbers at the bottom corresponding to the ordering of the vertices.
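Here is a small sequential sketch of one such merge round; the real system runs in the massively parallel setting described in the next paragraph, and the edge weights and union-find bookkeeping below are just an illustration.

```python
def affinity_round(weighted_edges, cluster_of):
    """One round of affinity clustering: every cluster selects its highest-cost
    outgoing edge, and the selected edges are used to merge clusters.

    weighted_edges: iterable of (u, v, weight)
    cluster_of: dict mapping vertex -> current cluster id
    Returns the updated vertex -> cluster mapping.
    """
    # 1. For each cluster, find its highest-cost edge leaving the cluster.
    best = {}
    for u, v, w in weighted_edges:
        cu, cv = cluster_of[u], cluster_of[v]
        if cu == cv:
            continue
        if cu not in best or w > best[cu][0]:
            best[cu] = (w, cv)
        if cv not in best or w > best[cv][0]:
            best[cv] = (w, cu)

    # 2. Merge along the selected edges with a tiny union-find.
    parent = {c: c for c in set(cluster_of.values())}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]
            c = parent[c]
        return c

    for c, (_, target) in best.items():
        parent[find(c)] = find(target)

    return {v: find(c) for v, c in cluster_of.items()}
```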

Our NIPS’17 paper explains how we run affinity clustering efficiently in the massively parallel computation (MPC) model, in particular using distributed hash tables (DHTs) to significantly reduce running time. This paper also presents a theoretical study of the algorithm. We report clustering results for graphs with tens of trillions of edges, and also observe that affinity clustering empirically beats other clustering algorithms such as k-means in terms of “quality of the clusters”. This video contains a summary of the result and explains how this parallel algorithm may produce higher-quality clusters even compared to a sequential single-linkage agglomerative algorithm.

Comparison to Previous Work
In comparing our algorithm to previous work in (distributed) balanced graph partitioning, we focus on FENNEL, Spinner, METIS, and a recent label propagation-based algorithm. We report results on several public social networks as well as a large private map graph. For a Twitter followership graph, we see a consistent improvement of 15–25% over previous results (Ugander and Backstrom, 2013), and for the LiveJournal graph, our algorithm outperforms all the others for all cases except k = 2, where ours is slightly worse than FENNEL’s.

The following table presents the fraction of cut edges in the Twitter graph obtained via different algorithms for various values of k, the number of clusters. The numbers given in parentheses denote the size imbalance factor: i.e., the relative difference of the sizes of largest and smallest clusters. Here “Vanilla Affinity Clustering” denotes the first stage of our algorithm where only the hierarchical clustering is built and no further processing is performed on the cuts. Notice that this is already as good as the best previous work (shown in the first two columns below), cutting a smaller fraction of edges while achieving a perfect (and thus better) balance (i.e., 0% imbalance). The last column in the table includes the final result of our algorithm with the post-processing.

[Table: fraction of cut edges in the Twitter graph for various values of k — columns: previous work, Vanilla Affinity Clustering, Final Algorithm]

We apply balanced graph partitioning to multiple applications including Google Maps driving directions, the serving backend for web search, and finding treatment groups for experimental design. For example, in Google Maps the world map graph is stored in several shards. Navigational queries spanning multiple shards are substantially more expensive than those handled within a shard. Using the methods described in our paper, we can reduce cross-shard queries by 21% by increasing the shard imbalance factor from 0% to 10%. As discussed in our paper, live experiments on real traffic show that the number of multi-shard queries from our cut-optimization techniques is 40% lower than with a baseline Hilbert embedding technique. This, in turn, results in less CPU usage in response to queries. In a future blog post, we will talk about the application of this work in the web search serving backend, where balanced partitioning helped us design a cache-aware load balancing system that dramatically reduced our cache miss rate.

We especially thank Vahab Mirrokni whose guidance and technical contribution were instrumental in developing these algorithms and writing this post. We also thank our other co-authors and colleagues for their contributions: Raimondas Kiveris, Soheil Behnezhad, Mahsa Derakhshan, MohammadTaghi Hajiaghayi, Silvio Lattanzi, Aaron Archer and other members of NYC Algorithms and Optimization research team.


Fever Makes Sense

Anyone running a fever needs bed rest and feels worn out, exhausted and "properly sick." Yet fever itself is not an illness. Quite the opposite: the organism uses fever as part of the healing process to create the best possible working conditions for its own immune system. Nevertheless, an entire industry makes its living from lowering fever, sometimes with serious consequences for health.

"Mothers don't take time off," says the TV ad for Wick DayMed. And instead of lying in bed with chills and aching limbs, we see the spontaneously recovered mom cheerfully building a snowman with her daughter. Aspirin Complex advertises 94% satisfied customers, Grippostad C boasts no fewer than four active ingredients. Everywhere the suggestion is that these medications eliminate the fever without fuss, and that this speeds up recovery. There are indeed numerous drugs available that quickly bring an elevated body temperature back down to a normal level: people pop a Thomapyrin or Fibrex, or give their toddler a Nurofen suppository. The only question is whether that is actually a good idea.

Even in healthy people, body temperature fluctuates considerably, usually in the range between 35.8 and 37.2 degrees Celsius. The measurement site alone can account for a difference of about one degree: readings are lowest under the armpit, somewhat higher when measured orally, and markedly higher when measured rectally. The time of day also matters. In the early morning the temperature can be a full degree below the evening readings without any illness being present. A good meal drives body heat up, as does ovulation in women. Without an underlying illness, however, the ceiling is about 38 degrees (measured rectally), and only above that do we speak of fever.

The immune system creates better working conditions for itself

Fever is not the illness itself but a symptom, a reaction to internal processes in the organism. The concrete trigger is the activity of the immune system as it responds to external influences, such as an infection with viruses or bacteria, or to internal signals from sites of disease.

As part of the healing response, the production of prostaglandins is stimulated, among other things. Prostaglandins are hormone-like agents that fulfill numerous vital tasks in the organism, including acting as messenger substances. They signal to the hypothalamus in the diencephalon, which functions as the "thermostat of our body," that a rise in temperature is needed. This improves the working conditions of the immune system in essential respects. Fever boosts the activity of the body's defenses by forcing the release of various messenger substances and hormones involved in the immune response. This process is highly complex and not yet fully understood in detail. The notion that fever itself kills bacteria and viruses, however, is false.

Fever's effect on microorganisms is an indirect one, based on the activation of the immune system. Most germs, by contrast, withstand the heat itself without difficulty.

Those affected do not notice that healing is underway. On the contrary: running a fever is exhausting, because heating up the organism consumes a great deal of energy. The heart rate rises, blood flow in the skin increases and the body breaks into a heavy sweat. A strong feeling of illness sets in. At the same time, the nerves involved intensify the sensations of weakness and pain. Naturally, this strengthens the wish to have the crisis over quickly with the help of medication.

Yet this intense reaction makes sense. It is a feedback mechanism introduced over the course of evolution that causes sick creatures with acute inflammation to withdraw and rest, so that no energy is unnecessarily diverted from the healing process.

Fever reducers disrupt internal communication
The way fever reducers work is instructive. Antipyretics and painkillers from the class of non-steroidal anti-inflammatory drugs, such as acetylsalicylic acid (aspirin), ibuprofen or diclofenac, interfere with this important repair mechanism. For several hours, these drugs block the production of all prostaglandins, not only those responsible for the symptoms perceived as negative. This creates the risk of numerous side effects, especially with overdose or chronic misuse.

A study published in October 2017, conducted in the emergency department of the Children's Hospital of Philadelphia, produced interesting results. It included slightly more than 22,000 patients with non-serious fever. Slightly more than half of the children, with otherwise largely similar clinical pictures, received fever-reducing medication: either paracetamol (38%), ibuprofen (19%) or both. The other half received no fever reducers.

Differences in the course of treatment, such as X-rays and the like, were taken into account in the analysis. Age, antibiotics, etc. were also controlled for, so that as far as possible only the pure effect of fever reduction remained in the results. And that effect was enormous:

Children whose fever was lowered had a significantly longer stay in the clinic. The probability that they had to stay longer than two hours was twice that of the untreated children. By far the longest stays were among children who received both medications.

Fever reducers are the drugs administered most quickly in children's hospitals. On average, 54 minutes pass between admission and the administration of the first antipyretic. Children who are in pain but have no elevated temperature, on the other hand, have to wait considerably longer, 83 minutes on average, before they receive a medication. "That fever is treated so much faster than pain has to do with the prevailing opinion among physicians that fever is the far more serious symptom," the study authors explain, adding: "Medically, this practice is unfounded."

If fever reducers are used at all, the current medical guidelines on fever management say they should be given orally; rectal administration more often leads to overdosing. Different active ingredients should not be combined. The practice of some doctors of giving parents antipyretics as a precaution after a vaccination appointment is rejected entirely. As numerous studies show, these drugs are also unsuitable for preventing febrile seizures.

The triumph of paracetamol
In the 1960s, the Australian physician Ralph D.K. Reye reported on children who had suffered acute damage to the brain and liver after being given aspirin.

This problem entered the medical literature as "Reye syndrome." The underlying mechanism was never really clarified in detail, and there is also uncertainty about how frequently the syndrome occurs; in Germany there have been only a handful of confirmed cases over the past decades. Nevertheless, these warnings led to strong advice against giving aspirin to children. Especially in the Anglo-American world, paracetamol has predominantly been used ever since.

Unlike aspirin or ibuprofen, paracetamol does not work by inhibiting prostaglandins but via direct pathways to the brain and spinal cord; how this works in detail is still unknown. Countries with the highest consumption of paracetamol, such as Australia, the USA and Great Britain, also lead the international rankings in the prevalence of asthma, atopic dermatitis and hay fever. This raised the question of whether paracetamol might play a part here.

Allergies and asthma
Numerous studies have shown that children who receive fever reducers in their first year of life have a markedly higher risk of allergies later on. Whether the use of antipyretics could be a direct trigger of asthma is controversial: while large epidemiological studies suggest a link, other scientists warn of possible false conclusions. Joanne E. Sordillo, a physician at Harvard Medical School in Boston, recently presented a study on this question in which 1,490 mothers and their children were monitored for the use of fever reducers both during pregnancy and during the child's first year of life. The results show how widespread the practice is: only 30% of the women reported never having taken paracetamol during pregnancy, and during the child's first year of life the figure was as low as 4.5%.

Children who received fever reducers particularly often had a one-third higher risk of asthma. "It is possible, however, that the asthma comes from the infections and not from the medications," explains study author Sordillo. Against this thesis, however, stands the fact that children of women who took paracetamol during the first phase of pregnancy later also had a one-third higher risk of asthma, independent of any infections in the child.

Those who run a fever recover faster

The idea that fever itself becomes life-threatening, and in particular that it leads directly to death above 41 degrees, is a stubbornly persistent myth. Fever is self-limiting: if it climbs too high, a counter-regulation sets in.

What is true, by contrast, is this: those who run a fever recover faster and more lastingly. The overall chance of surviving an infection is markedly higher with fever. Seriously ill people who do not develop a fever accordingly have not only a slower healing process but also a markedly worse prognosis.

Some cancer therapists exploit this effect by actively inducing fever through the injection of bacterial toxins. High fever can stimulate the immune system to mount a better defense against cancer. Yet although this fact is undisputed, the approach remains under-researched in oncology; the reluctance to use fever therapeutically appears too strong. And yet it would be nothing other than an imitation of nature's successful healing tactics.

This article is a slightly shortened version of a report that appeared in the March issue of the magazine "Naturarzt."
