RESPONDING TO HALLUCINATIONS IN GENERATIVE LARGE LANGUAGE MODELS

United States of America

APP PUB NO 20250094866A1
SERIAL NO 18678914

Abstract

Techniques for correcting hallucinations produced by generative large language models (LLMs). In one technique, a computing system accesses first output generated by an LLM. The computing system identifies, within the first output, a plurality of assertions. The computing system determines that a first assertion in the plurality of assertions is false. The computing system generates a prompt that indicates that the first assertion is false. The computing system submits the prompt as input to the LLM. The computing system accesses second output that is generated by the LLM, where the second output includes a second assertion that is different than the first assertion and corresponds to the first assertion.
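The correction loop described in the abstract can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the `stub_llm` callable, the sentence-level assertion splitter, and the dictionary-based fact checker are all simplifying assumptions standing in for a real model endpoint and a retrieval-based verifier.

```python
def split_into_assertions(output: str) -> list[str]:
    """Naively treat each sentence as one assertion (a simplifying assumption)."""
    return [s.strip() for s in output.split(".") if s.strip()]

def is_false(assertion: str, knowledge: dict[str, str]) -> bool:
    """Stub verifier: an assertion is false if it mentions a known subject
    but omits the known fact about that subject."""
    for subject, fact in knowledge.items():
        if subject in assertion and fact not in assertion:
            return True
    return False

def correct_hallucinations(llm, first_output: str, knowledge: dict[str, str]) -> str:
    """Identify assertions in the first output, flag a false one, prompt the
    LLM that it is false, and return the LLM's second (corrected) output."""
    for assertion in split_into_assertions(first_output):
        if is_false(assertion, knowledge):
            # Generate a prompt indicating the assertion is false and resubmit.
            prompt = (f"The statement '{assertion}' is false. "
                      f"Please restate it correctly.")
            return llm(prompt)  # second output with a corrected assertion
    return first_output  # nothing to correct

# Stand-in LLM that "corrects" any assertion it is told is false.
def stub_llm(prompt: str) -> str:
    return "Oracle was founded in 1977."

knowledge = {"Oracle": "1977"}
first = "Oracle was founded in 1985."
second = correct_hallucinations(stub_llm, first, knowledge)
print(second)  # -> Oracle was founded in 1977.
```

In a production setting the verifier would typically ground each assertion against retrieved documents, and the corrective prompt would carry the supporting evidence along with the falsity flag.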

Patent Owner(s)

Patent Owner: ORACLE INTERNATIONAL CORPORATION
Address: 500 ORACLE PARKWAY, MAIL STOP 5OP7, REDWOOD SHORES, CA 94065


Inventor(s)

Inventor Name        | Address       | # of Filed Patents | Total Citations
Guo, Mengqing        | Redmond, US   | 14                 | 0
Hu, Yazhe            | Bellevue, US  | 14                 | 2
Mamtani, Vinod Murli | Bellevue, US  | 19                 | 253
Qian, Jun            | Bellevue, US  | 175                | 7315
Sheng, Tao           | Bellevue, US  | 99                 | 693
Wang, Zheng          | Sammamish, US | 534                | 5554
