Check out Hot Toys 1/6th scale Alita: Battle Angel 12-inch Collectible Figure Preview Pictures

Pre-order Hot Toys MMS520 Alita: Battle Angel 1/6th Scale Collectible Figure from BBTS (link HERE)

“I think you are someone very special.” – Dr. Ido

The long-awaited movie Alita: Battle Angel is finally coming to the big screen! Set in a future where robotic technology is thriving, this live-action adaptation follows an amnesiac cyborg, Alita, who awakens in a world she does not recognize. She comes to rely on a compassionate doctor and her street-smart friend, struggles to survive the treacherous journey across the Iron City, and discovers her extraordinary past. In this adventure of love, hope and empowerment, everything is new to Alita, every experience a first.

To get fans ready for this super exciting film, Hot Toys is pleased to bring you the groundbreaking new heroine Alita as a 1/6th scale collectible figure from Alita: Battle Angel.

Meticulously crafted based on the appearance of Alita in the film, the movie-accurate collectible figure features a newly developed head sculpt with separate rolling eyeballs, a highly detailed body that displays her complicated mechanical design, a beautifully tailored outfit with fine textures, a blade, a heart attachable to the body, and multiple interchangeable hands and feet to match the cyborg body. The figure also comes with an elaborate diorama figure stand inspired by the battle scenes.

Scroll down to see the rest of the pictures.
Click on them for bigger and better views.

Hot Toys MMS520 1/6th scale Alita Collectible Figure specially features:

  - A newly developed head sculpt with an authentic and detailed likeness of Alita in Alita: Battle Angel, equipped with separate rolling eyeballs
  - Movie-accurate facial expression and make-up
  - Dark brown medium-length sculpted hair (with magnetic feature)
  - Multiple shades of metallic purple and black, mixed with silver-colored paint, on the mechanical body design
  - A newly developed body, approximately 27 cm tall, with over 30 points of articulation
  - Seven (7) interchangeable mechanical hands, including: a pair of fists, a pair of relaxed hands, a pair of gesturing hands, and a right hand for holding the blade
  - A pair of interchangeable feet with mechanical details
  - Enhanced articulation allowing highly flexible movement

Costume: specially tailored black-colored leather-like vest, black-colored leather-like pants, interchangeable black-colored boots

Weapon: blade

Accessories: heart (attachable to body) | ruined-city-themed diorama figure stand with transparent pole, character nameplate and movie logo

Release date: Approximately Q4, 2019 – Q1, 2020

Transformer-XL: Unleashing the Potential of Attention Models

Posted by Zhilin Yang and Quoc Le, Google AI

To correctly understand an article, one sometimes needs to refer to a word or a sentence that occurred a few thousand words back. This is an example of long-range dependence — a common phenomenon in sequential data — that must be understood in order to handle many real-world tasks. While people do this naturally, modeling long-term dependency with neural networks remains a challenge. Gating-based RNNs and gradient clipping improve the ability to model long-term dependencies, but are still not sufficient to fully address this issue.

One way to approach this challenge is to use Transformers, which allow direct connections between data units and therefore offer the promise of better capturing long-term dependency. However, in language modeling, Transformers are currently implemented with a fixed-length context, i.e., a long text sequence is truncated into fixed-length segments of a few hundred characters, and each segment is processed separately.

Vanilla Transformer with a fixed-length context at training time.

This introduces two critical limitations:

  1. The algorithm is not able to model dependencies longer than the fixed segment length.
  2. The segments usually do not respect sentence boundaries, resulting in context fragmentation, which leads to inefficient optimization. Context fragmentation is troublesome even for short sequences, where long-range dependency is not an issue.
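As a rough illustration, the fixed-length chunking described above might look like the following (a hypothetical sketch with made-up helper names, not the paper's code):

```python
def make_segments(tokens, seg_len):
    """Split a token sequence into fixed-length segments.

    Each segment is processed independently, so any dependency between
    a token in one segment and a token in another is invisible to the
    model at training time, and a sentence may be cut mid-word.
    """
    return [tokens[i:i + seg_len] for i in range(0, len(tokens), seg_len)]

tokens = list("the cat sat on the mat")      # character-level, as in enwik8-style LM
segments = make_segments(tokens, seg_len=8)  # 3 segments: lengths 8, 8, 6
```

Nothing in `segments[1]` can attend to anything in `segments[0]`, which is exactly the dependency-length limitation and the context-fragmentation problem described above.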

To address these limitations, we propose Transformer-XL, a novel architecture that enables natural language understanding beyond a fixed-length context. Transformer-XL consists of two techniques: a segment-level recurrence mechanism and a relative positional encoding scheme.

Segment-level Recurrence
During training, the representations computed for the previous segment are fixed and cached to be reused as an extended context when the model processes the next new segment. This additional connection increases the largest possible dependency length by N times, where N is the depth of the network, because contextual information is now able to flow across segment boundaries. Moreover, this recurrence mechanism also resolves the context fragmentation issue, providing the necessary context for tokens at the front of a new segment.

Transformer-XL with segment-level recurrence at training time.
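A minimal NumPy sketch of the idea, assuming a toy attention function (names, shapes, and the choice of what to cache are illustrative, not the paper's implementation — the real model caches hidden states per layer with gradients stopped):

```python
import numpy as np

def attend(query_states, context_states):
    """Toy self-attention: each query position attends over the full context."""
    scores = query_states @ context_states.T            # (q_len, ctx_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over context
    return weights @ context_states                     # (q_len, d_model)

def layer_with_memory(segment, memory):
    """One layer step: cached states are prepended as extra, read-only context."""
    context = segment if memory is None else np.concatenate([memory, segment], axis=0)
    new_states = attend(segment, context)
    # Cache this segment's states for the next segment (frozen, no gradient).
    return new_states, segment.copy()

rng = np.random.default_rng(0)
seg_a, seg_b = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
out_a, mem = layer_with_memory(seg_a, None)
out_b, _ = layer_with_memory(seg_b, mem)  # seg_b's tokens now see seg_a's states
```

The key point is in `layer_with_memory`: the second segment attends over eight context positions rather than four, so information flows across the segment boundary without backpropagating into the cached states.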

Relative Positional Encodings
Naively applying segment-level recurrence does not work, however, because the positional encodings are not coherent when we reuse the previous segments. For example, consider an old segment with contextual positions [0, 1, 2, 3]. When a new segment is processed, we have positions [0, 1, 2, 3, 0, 1, 2, 3] for the two segments combined, where the semantics of each position id is incoherent throughout the sequence. To this end, we propose a novel relative positional encoding scheme to make the recurrence mechanism possible. Moreover, unlike other relative positional encoding schemes, our formulation uses fixed embeddings with learnable transformations instead of learnable embeddings, and is thus more generalizable to longer sequences at test time. When both of these approaches are combined, Transformer-XL has a much longer effective context than a vanilla Transformer model at evaluation time.
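A sketch of the fixed sinusoidal table indexed by relative distance (illustrative only; the learnable transformations applied to these embeddings in the real model are omitted):

```python
import numpy as np

def relative_sinusoid(max_dist, d_model):
    """Fixed (not learned) sinusoid table for relative distances 0..max_dist-1."""
    pos = np.arange(max_dist)[:, None]                           # (max_dist, 1)
    inv_freq = 1.0 / (10000 ** (np.arange(0, d_model, 2) / d_model))
    angles = pos * inv_freq[None, :]                             # (max_dist, d_model/2)
    table = np.zeros((max_dist, d_model))
    table[:, 0::2] = np.sin(angles)  # even dims: sine
    table[:, 1::2] = np.cos(angles)  # odd dims: cosine
    return table

rel = relative_sinusoid(max_dist=16, d_model=8)
# The encoding for "3 tokens back" is rel[3], regardless of the query's
# absolute position — position ids stay coherent across reused segments.
```

Because the table depends only on the distance between a query and a key, the same encodings apply no matter how long the cached context grows, which is why fixed embeddings generalize to longer sequences at test time.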

Vanilla Transformer with a fixed-length context at evaluation time.

Transformer-XL with segment-level recurrence at evaluation time.

Furthermore, Transformer-XL is able to process the elements in a new segment all together without recomputation, leading to a significant speed increase (discussed below).
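A back-of-the-envelope sketch of why caching matters at evaluation time (hypothetical counting functions, not a benchmark of the actual models):

```python
def vanilla_eval_steps(n_tokens, window):
    """Tokens re-encoded by a vanilla Transformer that slides a fixed-length
    window one position per predicted token, recomputing from scratch."""
    return sum(min(window, t + 1) for t in range(n_tokens))

def xl_eval_steps(n_tokens):
    """With cached segment states, each token is encoded only once."""
    return n_tokens

n, w = 1000, 100
speedup = vanilla_eval_steps(n, w) / xl_eval_steps(n)
# The vanilla model re-encodes roughly `window` tokens per prediction,
# so the gap grows with the attention window.
```

This toy count only tracks tokens encoded, not per-layer cost, but it captures the mechanism behind the evaluation speedups reported below.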

Transformer-XL obtains new state-of-the-art (SoTA) results on a variety of major language modeling (LM) benchmarks, including character-level and word-level tasks on both long and short sequences. Empirically, Transformer-XL enjoys three benefits:

  1. Transformer-XL learns dependencies that are about 80% longer than RNNs and 450% longer than vanilla Transformers, which generally perform better than RNNs but are not the best for long-range dependency modeling due to fixed-length contexts (please see our paper for details).
  2. Transformer-XL is up to 1,800+ times faster than a vanilla Transformer during evaluation on language modeling tasks, because no re-computation is needed (see figures above).
  3. Transformer-XL has better performance in perplexity (more accurate at predicting a sample) on long sequences because of long-term dependency modeling, and also on short sequences by resolving the context fragmentation problem.

Transformer-XL improves the SoTA bpc/perplexity from 1.06 to 0.99 on enwiki8, from 1.13 to 1.08 on text8, from 20.5 to 18.3 on WikiText-103, from 23.7 to 21.8 on One Billion Word, and from 55.3 to 54.5 on Penn Treebank (without fine-tuning). We are the first to break through the 1.0 barrier on character-level LM benchmarks.

We envision many exciting potential applications of Transformer-XL, including but not limited to improving language model pretraining methods such as BERT, generating realistic, long articles, and applications in the image and speech domains, which are also important areas in the world of long-term dependency. For more detail, please see our paper.

The code, pretrained models, and hyperparameters used in our paper are also available in both Tensorflow and PyTorch on GitHub.
