PROPAGATING ATTENTION INFORMATION IN EFFICIENT MACHINE LEARNING MODELS


United States of America Patent

APP PUB NO: 20240160896A1
SERIAL NO: 18335685


Abstract


Certain aspects of the present disclosure provide techniques and apparatus for improved attention-based machine learning. A first attention propagation output is generated using a first transformer block of a plurality of transformer blocks, this generation including processing input data for the first transformer block using a first self-attention sub-block of the first transformer block. The first attention propagation output is propagated to a second transformer block of the plurality of transformer blocks. An output for the second transformer block is generated, this generation including generating output features for the second transformer block based on the first attention propagation output.
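The abstract describes reusing an attention map computed in one transformer block inside a later block, rather than recomputing self-attention there. The sketch below is a minimal, hypothetical NumPy illustration of that idea under assumed details (single head, unmasked attention, arbitrary weight names such as `wq`, `wk`, `wv1`, `wv2`); it is not the patented implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """First block: full self-attention; returns output AND the attention map."""
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn @ v, attn

def propagated_block(x, wv, attn_prev):
    """Second block: reuses (propagates) the earlier attention map,
    skipping its own query/key computation."""
    return attn_prev @ (x @ wv)

rng = np.random.default_rng(0)
n, d = 4, 8                                  # tokens, feature dim (toy sizes)
x = rng.standard_normal((n, d))
wq, wk, wv1, wv2 = (rng.standard_normal((d, d)) for _ in range(4))

out1, attn1 = self_attention(x, wq, wk, wv1)  # first transformer block
out2 = propagated_block(out1, wv2, attn1)     # second block reuses attn1
print(out2.shape)                             # prints (4, 8)
```

Skipping the query/key projections and the softmax in the second block is what makes this pattern attractive for efficient models, at the cost of the second block seeing slightly stale attention.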




Patent Owner(s)

Patent Owner: QUALCOMM INCORPORATED
Address: 5775 MOREHOUSE DRIVE, SAN DIEGO, CA 92121-1714


Inventor(s)

Inventor Name              Address        # of Filed Patents  Total Citations
GHODRATI, Amir             Amsterdam, NL  7                   16
HABIBIAN, Amirhossein      Amsterdam, NL  27                  179
VENKATARAMANAN, Shashanka  Amsterdam, NL  2                   0
