The Google Brain Team — Looking Back on 2017 (Part 1 of 2)

Posted by Jeff Dean, Google Senior Fellow, on behalf of the entire Google Brain Team

The Google Brain team works to advance the state of the art in artificial intelligence through research and systems engineering, as one part of the overall Google AI effort. Last year we shared a summary of our work in 2016. Since then, we’ve continued to make progress on our long-term research agenda of making machines intelligent, and have collaborated with a number of teams across Google and Alphabet to use the results of our research to improve people’s lives. This first of two posts highlights some of our work in 2017, including some of our basic research, as well as updates on open source software, datasets, and new hardware for machine learning. In the second post we’ll dive into the research we do in specific domains where machine learning can have a large impact, such as healthcare, robotics, and some areas of basic science; we’ll also cover our work on creativity, fairness, and inclusion, and tell you a bit more about who we are.

Core Research
A significant focus of our team is pursuing research that advances our understanding and improves our ability to solve new problems in the field of machine learning. Below are several themes from our research last year.

AutoML
The goal of automating machine learning is to develop techniques for computers to solve new machine learning problems automatically, without the need for human machine learning experts to intervene on every new problem. If we’re ever going to have truly intelligent systems, this is a fundamental capability that we will need. We developed new approaches for designing neural network architectures using both reinforcement learning and evolutionary algorithms, scaled this work to state-of-the-art results on ImageNet classification and detection, and also showed how to learn new optimization algorithms and effective activation functions automatically. We are actively working with our Cloud AI team to bring this technology into the hands of Google customers, as well as continuing to push the research in many directions.
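The flavor of the evolutionary approach can be seen in a toy sketch of regularized evolution: maintain a population of architecture encodings, repeatedly mutate the best of a small random sample, and retire the oldest member. Everything below is illustrative, not our actual system: the encoding, the mutation rules, and especially `proxy_fitness`, a synthetic stand-in for the expensive step of actually training each candidate network and measuring validation accuracy.

```python
import random

# Synthetic stand-in for validation accuracy. In real architecture search
# this step trains the candidate network and evaluates it on held-out data.
def proxy_fitness(arch):
    layers, width, activation = arch
    return -abs(layers - 4) - abs(width - 64) / 16 + (1.0 if activation == "swish" else 0.0)

def random_arch(rng):
    return (rng.randint(1, 8), rng.choice(range(8, 129, 8)),
            rng.choice(["relu", "tanh", "swish"]))

def mutate(arch, rng):
    # Change exactly one coordinate of the encoding.
    layers, width, activation = arch
    choice = rng.randrange(3)
    if choice == 0:
        layers = max(1, layers + rng.choice([-1, 1]))
    elif choice == 1:
        width = max(8, width + rng.choice([-8, 8]))
    else:
        activation = rng.choice(["relu", "tanh", "swish"])
    return (layers, width, activation)

def regularized_evolution(steps=500, population_size=20, sample_size=5, seed=0):
    rng = random.Random(seed)
    population = [random_arch(rng) for _ in range(population_size)]
    history = list(population)
    for _ in range(steps):
        sample = rng.sample(population, sample_size)       # tournament selection
        child = mutate(max(sample, key=proxy_fitness), rng)
        population.append(child)
        population.pop(0)                                  # oldest member dies
        history.append(child)
    return max(history, key=proxy_fitness)

best = regularized_evolution()
```

The age-based removal (rather than killing the worst candidate) is what distinguishes regularized evolution from plain tournament selection: no architecture survives forever on an early lucky evaluation.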

Convolutional architecture discovered by Neural Architecture Search
Object detection with a network discovered by AutoML

Speech Understanding and Generation
Another theme is developing new techniques that improve the ability of our computing systems to understand and generate human speech. This includes a collaboration with the speech team at Google on a number of improvements for an end-to-end approach to speech recognition, which reduces the word error rate by 16% relative to Google’s production speech recognition system. One nice aspect of this work is that it required many separate threads of research to come together (which you can find on arXiv: 1, 2, 3, 4, 5, 6, 7, 8, 9).
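For context on the 16% figure: word error rate (WER) is the word-level edit distance between a hypothesis transcript and the reference, divided by the reference length, and a 16% *relative* reduction means the new WER is 0.84 times the old one. A minimal WER implementation (the example sentences are made up):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    via the standard Levenshtein dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[-1][-1] / len(ref)

wer = word_error_rate("the cat sat on the mat", "the cat sit on mat")  # 2 errors / 6 words
# A 16% relative reduction would mean a new WER of wer * (1 - 0.16).
```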

Components of the Listen-Attend-Spell end-to-end model for speech recognition

We also collaborated with our research colleagues on Google’s Machine Perception team to develop a new approach for performing text-to-speech generation (Tacotron 2) that dramatically improves the quality of the generated speech. This model achieves a mean opinion score (MOS) of 4.53 compared to a MOS of 4.58 for professionally recorded speech like you might find in an audiobook, and 4.34 for the previous best computer-generated speech system. You can listen for yourself.

Tacotron 2’s model architecture

New Machine Learning Algorithms and Approaches
We continued to develop novel machine learning algorithms and approaches, including work on capsules (which explicitly look for agreement in activated features as a way of evaluating many different noisy hypotheses when performing visual tasks), sparsely-gated mixtures of experts (which enable very large models that are still computationally efficient), hypernetworks (which use the weights of one model to generate weights for another model), new kinds of multi-modal models (which perform multi-task learning across audio, visual, and textual inputs in the same model), attention-based mechanisms (as an alternative to convolutional and recurrent models), symbolic and non-symbolic learned optimization methods, a technique to back-propagate through discrete variables, and a few new reinforcement learning algorithmic improvements.
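To make the sparsely-gated mixture-of-experts idea concrete, here is a toy sketch of top-k gating: a linear gate scores every expert, only the k highest-scoring experts are evaluated, and their outputs are mixed with softmax weights renormalized over that top-k set. The scalar "experts" and gate weights below are invented for illustration; in the actual work both are learned networks.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sparse_moe(x, experts, gate_weights, k=2):
    """Evaluate only the k experts with the highest gate scores and mix
    their outputs; the other experts cost nothing, which is what keeps
    very large models computationally efficient."""
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights]
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:k]
    probs = softmax([scores[i] for i in top])   # renormalize over the top-k only
    return sum(p * experts[i](x) for p, i in zip(probs, top))

# Invented scalar "experts" standing in for learned sub-networks.
experts = [sum, max, min, lambda x: sum(x) / len(x)]
gate_weights = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.5, 0.5]]
y = sparse_moe([2.0, 1.0], experts, gate_weights, k=2)
```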

Machine Learning for Computer Systems
The use of machine learning to replace traditional heuristics in computer systems also greatly interests us. We have shown how to use reinforcement learning to make placement decisions for mapping computational graphs onto a set of computational devices, producing placements that outperform those chosen by human experts. With other colleagues in Google Research, we have shown in “The Case for Learned Index Structures” that neural networks can be both faster and much smaller than traditional data structures such as B-trees, hash tables, and Bloom filters. We believe that we are just scratching the surface in terms of the use of machine learning in core computer systems, as outlined in a NIPS workshop talk on Machine Learning for Systems and Systems for Machine Learning.
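The core idea of a learned index can be sketched in a few lines: treat a sorted array as data mapping key to position, fit a model to that mapping (here just a least-squares line, far simpler than the paper's hierarchical models), and correct any prediction error with a bounded local scan. This is an illustrative sketch, not the paper's implementation:

```python
def build_learned_index(keys):
    """Fit position ~ a * key + b by least squares over the sorted keys,
    recording the worst-case prediction error for a bounded local scan."""
    n = len(keys)
    mean_x = sum(keys) / n
    mean_y = (n - 1) / 2
    a = sum((x - mean_x) * (y - mean_y) for y, x in enumerate(keys)) \
        / sum((x - mean_x) ** 2 for x in keys)
    b = mean_y - a * mean_x
    max_err = max(abs(a * x + b - y) for y, x in enumerate(keys))
    return a, b, int(max_err) + 2   # +2 absorbs rounding of the guess

def lookup(keys, model, key):
    a, b, err = model
    guess = int(a * key + b)
    lo, hi = max(0, guess - err), min(len(keys) - 1, guess + err)
    for i in range(lo, hi + 1):      # scan only the small error window
        if keys[i] == key:
            return i
    return None

keys = sorted(7 * i + i % 5 for i in range(200))   # near-linear key set
model = build_learned_index(keys)
assert all(lookup(keys, model, k) == i for i, k in enumerate(keys))
```

The model here is three numbers, versus the many nodes of a B-tree over the same keys; the paper's point is that when key distributions are learnable, the model can be both smaller and faster.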

Learned Models as Index Structures

Privacy and Security
Machine learning and its interactions with security and privacy continue to be major research foci for us. We showed that machine learning techniques can be applied in a way that provides differential privacy guarantees, in a paper that received one of the best paper awards at ICLR 2017. We also continued our investigation into the properties of adversarial examples, including demonstrating adversarial examples in the physical world, and showing how to harness adversarial examples at scale during the training process to make models more robust to them.
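Differential privacy itself is easy to illustrate, although the paper's approach (training models under a privacy budget) is more involved than this. The classic Laplace mechanism below answers a counting query privately: a count has sensitivity 1, so adding Laplace noise of scale 1/epsilon yields epsilon-differential privacy. The data and query are made up for illustration:

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of Laplace(0, scale).
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon, rng):
    """Counting queries have sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise of scale 1/epsilon
    gives epsilon-differential privacy for the released count."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
ages = [23, 35, 41, 29, 52, 64, 30, 18]        # made-up records
noisy = [private_count(ages, lambda a: a >= 30, 1.0, rng) for _ in range(2000)]
avg = sum(noisy) / len(noisy)                  # concentrates near the true count of 5
```

Each individual release is noisy, but unbiased: averaging many releases (which would spend more privacy budget in practice) recovers the true count.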

Understanding Machine Learning Systems
While we have seen impressive results with deep learning, it is important to understand why it works, and when it won’t. In another of the best paper award winners at ICLR 2017, we showed that current machine learning theoretical frameworks fail to explain the impressive results of deep learning approaches. We also showed that the “flatness” of minima found by optimization methods is not as closely linked to good generalization as initially thought. In order to better understand how training proceeds in deep architectures, we published a series of papers analyzing random matrices, as they are the starting point of most training approaches. Another important avenue for understanding deep learning is to better measure the performance of such models. We showed the importance of good experimental design and statistical rigor in a recent study comparing many GAN approaches, which found that many popular enhancements to generative models do not actually improve performance. We hope this study will set an example for other researchers to follow in conducting robust experimental studies.
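One concrete habit that study encourages is reporting scores across many random seeds with confidence intervals, rather than a single best run. A minimal sketch (the scores below are synthetic stand-ins for per-seed model evaluations):

```python
import math
import random

def mean_and_ci(scores, z=1.96):
    """Sample mean with a normal-approximation 95% confidence interval."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)   # unbiased variance
    half = z * math.sqrt(var / n)
    return mean, (mean - half, mean + half)

# Synthetic per-seed scores for a baseline model and a claimed improvement.
rng = random.Random(0)
baseline = [50 + rng.gauss(0, 3) for _ in range(30)]
variant = [51 + rng.gauss(0, 3) for _ in range(30)]
m_base, ci_base = mean_and_ci(baseline)
m_var, ci_var = mean_and_ci(variant)
# Overlapping intervals are a warning sign that the "improvement" may be
# within seed-to-seed noise; a proper test would compare the runs directly.
intervals_overlap = ci_var[0] <= ci_base[1] and ci_base[0] <= ci_var[1]
```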

We are developing methods that allow better interpretability of machine learning systems. And in March, in collaboration with OpenAI, DeepMind, YC Research and others, we announced the launch of Distill, a new online open science journal dedicated to supporting human understanding of machine learning. It has gained a reputation for clear exposition of machine learning concepts and for excellent interactive visualization tools in its articles. In its first year, Distill published many illuminating articles aimed at understanding the inner workings of various machine learning techniques, and we look forward to the many more that are sure to come in 2018.

Feature Visualization
How to Use t-SNE Effectively

Open Datasets for Machine Learning Research
Open datasets like MNIST, CIFAR-10, ImageNet, SVHN, and WMT have pushed the field of machine learning forward tremendously. Our team and Google Research as a whole have been active in open-sourcing interesting new datasets for open machine learning research over the past year or so, providing access to more large labeled datasets, including the YouTube-Bounding Boxes dataset shown below.

Examples from the YouTube-Bounding Boxes dataset: Video segments sampled at 1 frame per second, with bounding boxes successfully identified around the items of interest.

TensorFlow and Open Source Software

A map showing the broad distribution of TensorFlow users (source)

Throughout our team’s history, we have built tools that help us to conduct machine learning research and deploy machine learning systems in Google’s many products. In November 2015, we open-sourced our second-generation machine learning framework, TensorFlow, with the hope of allowing the machine learning community as a whole to benefit from our investment in machine learning software tools. In February, we released TensorFlow 1.0, and in November, we released v1.4 with these significant additions: Eager execution, for interactive imperative-style programming; XLA, an optimizing compiler for TensorFlow programs; and TensorFlow Lite, a lightweight solution for mobile and embedded devices. The pre-compiled TensorFlow binaries have now been downloaded more than 10 million times in over 180 countries, and the source code on GitHub now has more than 1,200 contributors.

In February, we hosted the first ever TensorFlow Developer Summit, with over 450 people attending live in Mountain View and more than 6,500 watching on live streams around the world, including at more than 85 local viewing events in 35 countries. All talks were recorded, with topics ranging from new features and techniques for using TensorFlow to detailed looks under the hood at low-level TensorFlow abstractions. We’ll be hosting another TensorFlow Developer Summit on March 30, 2018 in the Bay Area. Sign up now to save the date and stay updated on the latest news.

This rock-paper-scissors science experiment is a novel use of TensorFlow. We’ve been excited by the wide variety of uses of TensorFlow we saw in 2017, including automating cucumber sorting, finding sea cows in aerial imagery, sorting diced potatoes to make safer baby food, identifying skin cancer, helping to interpret bird call recordings in a New Zealand bird sanctuary, and identifying diseased plants in the most popular root crop on Earth in Tanzania!

In November, TensorFlow celebrated its second anniversary as an open-source project. It has been incredibly rewarding to see a vibrant community of TensorFlow developers and users emerge. TensorFlow is the #1 machine learning platform on GitHub and one of the top five repositories on GitHub overall, used by many companies and organizations, big and small, with more than 24,500 distinct repositories on GitHub related to TensorFlow. Many research papers are now published with open-source TensorFlow implementations to accompany the research results, enabling the community to more easily understand the exact methods used and to reproduce or extend the work.

TensorFlow has also benefited from other Google Research teams open-sourcing related work, including TF-GAN, a lightweight library for generative adversarial models in TensorFlow, TensorFlow Lattice, a set of estimators for working with lattice models, as well as the TensorFlow Object Detection API. The TensorFlow model repository continues to grow with an ever-widening set of models.

In addition to TensorFlow, we released deeplearn.js, an open-source hardware-accelerated implementation of deep learning APIs right in the browser (with no need to download or install anything). The deeplearn.js homepage has a number of great examples, including Teachable Machine, a computer vision model you train using your webcam, and Performance RNN, a real-time neural-network based piano composition and performance demonstration. We’ll be working in 2018 to make it possible to deploy TensorFlow models directly into the deeplearn.js environment.

TPUs

Cloud TPUs deliver up to 180 teraflops of machine learning acceleration

About five years ago, we recognized that deep learning would dramatically change the kinds of hardware we would need. Deep learning computations are very computationally intensive, but they have two special properties: they are largely composed of dense linear algebra operations (matrix multiplications, vector operations, etc.), and they are very tolerant of reduced precision. We realized that we could take advantage of these two properties to build specialized hardware that can run neural network computations very efficiently. We provided design input to Google’s Platforms team, and they designed and produced our first generation Tensor Processing Unit (TPU): a single-chip ASIC designed to accelerate inference for deep learning models (inference is the use of an already-trained neural network, and is distinct from training). This first-generation TPU has been deployed in our data centers for three years, and it has been used to power deep learning models on every Google Search query, for Google Translate, for understanding images in Google Photos, for the AlphaGo matches against Lee Sedol and Ke Jie, and for many other research and product uses. In June, we published a paper at ISCA 2017 showing that this first-generation TPU was 15X – 30X faster than its contemporary GPU or CPU counterparts, with performance/Watt about 30X – 80X better.
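The reduced-precision tolerance mentioned above is easy to see with a toy example of symmetric linear quantization: map floating-point weights to 8-bit integers with one shared scale, and observe that the round-trip error is bounded by half the quantization step. This is a generic illustration, not the TPU's actual number format:

```python
def quantize_int8(values):
    """Symmetric linear quantization: one shared scale maps floats in
    [-max_abs, max_abs] onto integers in [-127, 127]."""
    scale = max(abs(v) for v in values) / 127 or 1.0   # avoid a zero scale
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.5, -1.25, 0.03, 0.99, -0.4]               # made-up weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Rounding to the nearest integer bin means the round-trip error is at
# most half a bin width, i.e. scale / 2.
max_abs_err = max(abs(w - r) for w, r in zip(weights, restored))
```

Each value now fits in one byte instead of four, and for neural network inference the small rounding error typically has little effect on accuracy, which is the property specialized hardware exploits.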

Cloud TPU Pods deliver up to 11.5 petaflops of machine learning acceleration
Experiments with ResNet-50 training on ImageNet show near-perfect speed-up as the number of TPU devices used increases.

Inference is important, but accelerating the training process is an even more important problem – and also much harder. The faster researchers can try a new idea, the more breakthroughs we can make. Our second-generation TPU, announced at Google I/O in May, is a whole system (custom ASIC chips, board and interconnect) that is designed to accelerate both training and inference, and we showed a single device configuration as well as a multi-rack deep learning supercomputer configuration called a TPU Pod. We announced that these second generation devices will be offered on the Google Cloud Platform as Cloud TPUs. We also unveiled the TensorFlow Research Cloud (TFRC), a program that gives top ML researchers who are committed to sharing their work with the world free access to a cluster of 1,000 Cloud TPUs. In December, we presented work showing that we can train a ResNet-50 ImageNet model to a high level of accuracy in 22 minutes on a TPU Pod, as compared to days or longer on a typical workstation. We think lowering research turnaround times in this fashion will dramatically increase the productivity of machine learning teams here at Google and at all of the organizations that use Cloud TPUs. If you’re interested in Cloud TPUs, TPU Pods, or the TensorFlow Research Cloud, you can sign up to learn more at g.co/tpusignup. We’re excited to enable many more engineers and researchers to use TPUs in 2018!

Thanks for reading!

(In part 2 we’ll discuss our research in the application of machine learning to domains like healthcare, robotics, different fields of science, and creativity, as well as cover our work on fairness and inclusion.)


Hot Toys Star Wars: Episode V The Empire Strikes Back 1/6th scale Boba Fett (Deluxe Version) 12-inch Collectible Figure Preview


Pre-order Hot Toys Star Wars: The Empire Strikes Back MMS464 Boba Fett (Deluxe) 1/6th Scale Collectible Figure from BBTS – link HERE

Armed with customized Mandalorian armor, dangerous weaponry and highly trained combat skills, Boba Fett has earned a notorious reputation as one of the deadliest bounty hunters in the galaxy, taking on contracts from the criminal underworld and the Galactic Empire.

Boba Fett left an unforgettable impression on many Star Wars fans when he was introduced on the silver screen in Star Wars: Episode V The Empire Strikes Back, and today Hot Toys is thrilled to present a very special Deluxe Version of the 1/6th scale Boba Fett collectible figure that will surely excite many diehard Star Wars fans with its unique display option!

Based on his appearance in Star Wars: Episode V The Empire Strikes Back, the Boba Fett collectible figure features a meticulously crafted Mandalorian helmet and armor with distressed effects, his iconic jetpack, a cape, detailed blasters, and a figure stand!

The biggest highlight of this Deluxe Version is none other than the fact that Boba Fett’s Alternate Version armor, based on his “movie pre-production” look, is realized for the first time on a 1/6th scale collectible figure. It exclusively includes a range of interchangeable parts such as a helmet, jetpack, gauntlets, cape, and a number of gloved hands for fans to display this infamous bounty hunter’s stylistic look from before his official appearance in Star Wars: Episode V The Empire Strikes Back!



The following “Alternate Version Boba Fett Armor” parts are exclusive to MMS464 DELUXE VERSION: Newly developed interchangeable Mandalorian helmet with unique markings and articulated rangefinder | Seven (7) pieces of dark red and light brown colored interchangeable gloved hands including: pair of fists, pair of relaxed hands, pair of hands for holding weapons, gesturing left hand

“Alternate Version Boba Fett Armor” Costume: green colored cape with weathering effect, light brown colored pouch (attachable to the belt), red-orange colored pouch (attachable to the belt), yellow colored left arm gauntlet, red colored right arm gauntlet, pair of spats

“Alternate Version Boba Fett Armor” Weapon: blaster pistol with stock

“Alternate Version Boba Fett Armor” Accessory: One green, yellow and white color jetpack with weathering effect (equipped with magnetic feature)

Hot Toys MMS464 1/6th scale Boba Fett (Deluxe Version) Collectible Figure specially features: Authentic and detailed likeness of Boba Fett in Star Wars: Episode V The Empire Strikes Back | Newly crafted Boba Fett Mandalorian helmet with articulated rangefinder | Specially applied distress effects on armor, weapons and accessories | Approximately 30 cm tall body with over 30 points of articulation | Seven (7) pieces of blue-colored interchangeable gloved hands including: pair of fists, pair of relaxed hands, pair of hands for holding weapons, gesturing left hand

Costume: Boba Fett’s Mandalorian armor with weathering effect, gray flight suit, yellow cape, leather-like brown belt with pouches, pair of green gauntlets, pair of yellow knee guards, pair of brown boots

Weapons: blaster rifle with battle-damaged effect, blaster sidearm with leather-like holster

Accessories: jetpack with weathering effect (equipped with magnetic feature), survival knife, sonic beam weapon, anti-security blade, jetpack adjustment tool, Specially designed figure stand with Boba Fett’s nameplate and Star Wars logo

Release date: Approximately Q4, 2018 – Q1, 2019



Hot Toys SW: Episode V The Empire Strikes Back – 1/6th scale Boba Fett collectible figure


Pre-order Hot Toys Star Wars: The Empire Strikes Back MMS463 Boba Fett 1/6th Scale Collectible Figure from BBTS – link HERE

With his customized Mandalorian armor, deadly weaponry, and silent demeanor, Boba Fett was one of the most feared bounty hunters in the galaxy.

A genetic clone of his “father,” bounty hunter Jango Fett, Boba learned combat and martial skills from a young age. Over the course of his career, which included contracts for the Empire and the criminal underworld, he became a legend.

Expanding our Star Wars classic trilogy collectible series, Hot Toys is excited to officially present the 1/6th scale collectible figure of Boba Fett from his first silver screen appearance in Star Wars: Episode V The Empire Strikes Back!

The highly-accurate collectible figure is specially crafted based on the appearance of Boba Fett in Star Wars: Episode V The Empire Strikes Back, featuring a meticulously crafted Mandalorian helmet and armor, the bounty hunter’s iconic jetpack, a cape, detailed blasters, and a figure stand.



MMS463 1/6th scale Boba Fett Collectible Figure specially features: Authentic and detailed likeness of Boba Fett in Star Wars: Episode V The Empire Strikes Back | Newly crafted Boba Fett Mandalorian helmet with articulated rangefinder | Specially applied distress effects on armor, weapons and accessories | Approximately 30 cm tall body with over 30 points of articulation | Seven (7) pieces of blue-colored interchangeable gloved hands including: pair of fists, pair of relaxed hands, pair of hands for holding weapons, gesturing left hand

Costume: Boba Fett’s Mandalorian armor with weathering effect, gray flight suit, yellow cape, leather-like brown belt with pouches, green gauntlets, yellow knee guards, brown boots

Weapons: blaster rifle with battle-damaged effect, blaster sidearm with leather-like holster

Accessories: jetpack with weathering effect (equipped with magnetic feature), survival knife, sonic beam weapon, anti-security blade, jetpack adjustment tool, Specially designed figure stand with Boba Fett’s nameplate and Star Wars logo



King of Figures KOF004 1/6th scale Creator 08 12" figure aka Michael Fassbender as David 08

Alien: Covenant is a 2017 science fiction horror film directed by Ridley Scott, and written by John Logan and Dante Harper from a story by Michael Green and Jack Paglen. A joint American and British production, the film is a sequel to Prometheus (2012), the second installment in the Alien prequel series and the sixth installment overall in the Alien film series, as well as the third directed by Scott. The film features returning star Michael Fassbender reprising his role as David from Prometheus and Katherine Waterston, with Billy Crudup, Danny McBride and Demián Bichir in supporting roles. It follows the crew of a colony ship that lands on an uncharted planet and makes a terrifying discovery.

King of Figures KOF004 1/6th scale Creator 08 12-inch figure aka Michael Fassbender as David 08 comes with: Head sculpt, Body, 5 interchangeable hands in different poses, Grey jacket and pants, Green jacket, Feet in slippers, Plimsolls, Blue wine glass, Biological weapon tank, Container for genes, Figure stand, Diorama base



Introducing the CVPR 2018 Learned Image Compression Challenge

Posted by Michele Covell, Research Scientist, Google Research

Edit 17/01/2018: Due to popular request, the CLIC competition submission deadline has been extended to April 22. Please see compression.cc for more details.

Image compression is critical to digital photography — without it, a 12 megapixel image would take 36 megabytes of storage, making most websites prohibitively large. While the signal-processing community has significantly improved image compression beyond JPEG (which was introduced in the 1980s) with modern image codecs (e.g., BPG, WebP), many of the techniques used in these modern codecs still use the same family of pixel transforms as JPEG. Multiple recent Google projects have advanced the field of image compression with end-to-end machine learning, compression through super-resolution, and perceptually improved JPEG images, but we believe that even greater improvements to image compression can be obtained by bringing this research challenge to the attention of the larger machine learning community.
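The arithmetic behind the 36 MB figure, plus the bits-per-pixel metric commonly used to report compression rates, can be sketched directly (the 1.2 MB compressed size below is a hypothetical example, not a claim about any particular codec):

```python
def raw_size_bytes(megapixels, bytes_per_pixel=3):
    """Uncompressed RGB: one byte per channel, three channels per pixel."""
    return int(megapixels * 1_000_000 * bytes_per_pixel)

def bits_per_pixel(compressed_bytes, megapixels):
    """Bits per pixel, the usual way compression rates are reported."""
    return compressed_bytes * 8 / (megapixels * 1_000_000)

raw = raw_size_bytes(12)          # 36,000,000 bytes: the 36 MB figure above
ratio = raw / 1_200_000           # 30x, if the compressed file is 1.2 MB
bpp = bits_per_pixel(1_200_000, 12)   # 0.8 bits per pixel for that same file
```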

To encourage progress in this field, Google, in collaboration with ETH and Twitter, is sponsoring the Workshop and Challenge on Learned Image Compression (CLIC) at the upcoming 2018 Computer Vision and Pattern Recognition conference (CVPR 2018). The workshop will bring together established contributors to traditional image compression with early contributors to the emerging field of learning-based image compression systems. Our invited speakers include image and video compression experts Jim Bankoski (Google) and Jens Ohm (RWTH Aachen University), as well as computer vision and machine learning experts with experience in video and image compression, Oren Rippel (WaveOne) and Ramin Zabih (Google, on leave from Cornell).

Training set of 1,633 uncompressed images from both the Mobile and Professional datasets, available on compression.cc

A database of copyright-free, high-quality images will be made available both for this challenge and to accelerate research in this area: Dataset P (“professional”) and Dataset M (“mobile”). The datasets were collected to be representative of images commonly used in the wild, and contain thousands of images. The challenge will allow participants to train neural networks or other methods on any amount of data (we expect participants to have access to additional data, such as ImageNet and the Open Images dataset), but it should be possible to train on the datasets provided.

The first large-image compression systems using neural networks were published in 2016 [Toderici2016, Ballé2016] and were only just matching JPEG performance. More recent systems have made rapid advances, to the point that they match or exceed the performance of modern industry-standard image compression [Ballé2017, Theis2017, Agustsson2017, Santurkar2017, Rippel2017]. This rapid advance in the quality of neural-network-based compression systems, based on the work of a comparatively small number of research labs, leads us to expect even more impressive results when the area is explored by a larger portion of the machine-learning community.

We hope to get your help advancing the state-of-the-art in this important application area, and we encourage you to participate if you are planning to attend CVPR this year! Please see compression.cc for more details about the new datasets and important workshop deadlines. Training data is already available on that site. The test set will be released on February 15 and the deadline for submitting the compressed versions of the test set is February 22.


Check out ToysCity 1/6th scale Diorama series – Road culvert scene for 12-inch action figures

The word diorama /ˌdaɪəˈrɑːmə/ can either refer to a 19th-century mobile theatre device, or, in modern usage, a three-dimensional full-size or miniature model, sometimes enclosed in a glass showcase for a museum. Dioramas are often built by hobbyists as part of related hobbies such as military vehicle modeling, miniature figure modeling, or aircraft modeling.

The current, popular understanding of the term “diorama” denotes a partially three-dimensional, full-size replica or scale model of a landscape typically showing historical events, nature scenes or cityscapes, for purposes of education or entertainment.

This is ToysCity 1/6th scale Diorama series – Road culvert scene for 12-inch action figures


Related posts:
Red Diorama “I shot the bus driver” scene: Recreate opening scene from The Dark Knight – posted on my toy blog HERE
Red Diorama “Throne” scene: the perfect setting for Loki, god of mischief, adoptive brother of Thor (pics HERE)
Unbelievable IHNS TOYS SO-00001 1/6th scale The Cave scene platform – diorama of dioramas! posted HERE


Preview TBLeague 1/6th Scale Arkhalla Queen of Vampires Figure, the Undying Queen of Ur

Pre-order TBLeague Queen of Vampires Arkhalla 1/6 Scale Figure from BBTS – link HERE

In the beginning of the Bronze Age, 5,000 years ago, her name was whispered in fear across the cities and kingdoms of early man. For uncounted years, she has ruled from her city of Ur, implacable, cruel, an undying creature feasting on human blood, her demonic powers making monsters of those she infects and enslaving all others under her reign of terror.

Her only weakness… a still all-too-human heart. Arkhalla, the Undying Queen of Ur

TBLeague 1/6th Scale Arkhalla Queen of Vampires Action Figure features: head sculpt, TB League female seamless body with metal skeleton, 3 pairs x interchangeable hands, 2 pairs x detachable feet, Asag’s Crown, armbands, bracelets, anklets, pair x pasties, bikini-style top, belt, skirt, Queen’s Scepter, Sacrificial Knife, Sickle Sword, Skull Goblet, base (A representation of “ASAG’S MIRROR” in the story)



Off and Running in 2018

Well, 2018 is here and we are back at it with a lot of news and notes to get to.  Normally I use the first post of the year for recaps and predictions, but I am going to push that into the next couple of weeks because there are other items to cover.

–  On the company side of things, we had a deal involving one of the companies in this industry I respect the most.  In an announcement soon after the new year, Technical Glass Products (TGP) sold to Allegion, plc.  I don’t know much about Allegion, but I do know they got an amazing company with incredible people.  Congrats to the Razwick family on the deal.  Everyone at TGP has always treated me extremely well and I am happy for them, as this situation really looks positive for everyone involved.

–  Aside from this one there are a few other deals that are bursting at the seams and I expect announcements in the next week or two.

–  Also, one of my favorite companies had a major rebrand at the end of the year: SC Railing is now known as Trex Commercial Products.  SC sold to Trex last July, so it makes sense to bring them into that family.  No matter the name, the influence and growth that SC/Trex has had in the industry has been quite impressive.

–  And while I am talking about words like impressive and favorite, a gentleman who fits both descriptions took on a new position at the start of the year.  Dan Wright was named President of Paragon Tempered Glass, moving up from his previous role as VP of Sales & Marketing.  I cannot tell you how happy I am for Dan- a wonderful guy who I knew was destined for greatness way back in the day when he had to put up with my constant whines and cries when I was a customer of his.  Congrats Dan- you will do awesome in this role!

–  Good news from the last Architecture Billings Index reading of 2017.  The ABI crushed it with a crazy score of 55.0 – the highest of the year – and really had people talking.  The main analyst from AIA chimed in with this positive nugget:

“Not only are design billings overall seeing their strongest growth of the year, the strength is reflected in all major regions and construction sectors,” said AIA chief economist, Kermit Baker, Hon. AIA, PhD. “The construction industry continues to show surprising momentum heading into 2018.”

So we are geared up nicely… as I noted above, I am saving my recaps and predictions for the coming weeks, and one reason is I want your input.  I have a poll posted on my Twitter feed at @maxpsolesource where you can add your vote (and feel free to comment below that poll or here), and it will add into my thoughts.  This is obviously very unscientific and my Twitter reach is not huge, but it’s an interesting angle to me nonetheless.

–  Last but certainly not least this week… there was some very sad news to report out of the gate as well.  Peter de Gorter, President & CEO of DeGorter Inc., passed away right before the new year.  The news was incredibly sad to me; the de Gorter family has such a strong positive presence in our industry and has been so active that a loss like this shakes you.  My thoughts & condolences to the entire de Gorter family.

LINKS of the WEEK

Scary stuff but it is the world we live in now… digital tracking.
I think I could eat at the same place for 428 straight days.. lol – I think I did that in college.  Seriously though, this guy did it… at Chipotle.
Gotta be more to this story, but biggest question is “why bring the date home?”
VIDEO of the WEEK 

Too long of a video but some great pieces here.. the Top 100 viral videos of 2017….


ZCWO Premier Collection 1/6th scale Police Tactical Unit Sir Zhan 2.0 12-inch action figure

The Police Tactical Unit (Abbreviation: PTU; Chinese: 警察機動部隊) is a unit within the Hong Kong Police Force which provides an immediate manpower reserve for use in large-scale emergencies. Unit companies are attached to all land Regions and are available for internal security, crowd control, anti-crime operations, disaster response and riot control throughout Hong Kong. The PTU is often referred to as the ‘Blue Berets’, in reference to the blue berets worn as part of the uniform. The PTU base and training camp is located in Fanling. The PTU is also the parent organization of the Special Duties Unit (SDU), which specializes in counter-terrorism and hostage rescue.

ZCWO Premier Collection 1/6th scale Police Tactical Unit Sir Zhan 2.0 12-inch action figure comes with: Lifelike head sculpt, New ZCWO-AB04 body, 3 pairs of interchangeable hands, Police uniform shirt, Police uniform pants, PTU beret, PASGT helmet, Black boots, Bulletproof vest, Police equipment duty belt, Remington 870 shotgun, Smith & Wesson Model 10 with holster

