Oct 25, 2023

Webinar: Mitigating LLM Hallucinations with DeepLearning.AI

Atindriyo Sanyal

CTO

Watch on YouTube

ChainPoll Paper

In the context of LLMs, “hallucination” refers to a phenomenon where the model generates text that is incorrect, nonsensical, or simply not real. Since LLMs are not databases or search engines, they do not cite the sources their responses are based on. These models generate text by extrapolating from the prompt you provide, and the result of that extrapolation is not necessarily supported by any training data; it is simply the text most strongly correlated with the prompt.
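
The linked ChainPoll paper describes one way to quantify this: ask an LLM judge, using chain-of-thought reasoning, whether a response contains a hallucination, repeat that judgment several times, and treat the fraction of “yes” verdicts as the score. Below is a minimal sketch of that idea in Python; the `ask_llm` placeholder, the prompt template, and the default of five polls are illustrative assumptions, not the paper’s exact implementation.

```python
# Minimal sketch of a ChainPoll-style hallucination score.
# `ask_llm` is a hypothetical placeholder, not a real API: swap in whatever
# chat-completion call your stack provides.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

JUDGE_TEMPLATE = (
    "Question: {question}\n"
    "Proposed answer: {answer}\n\n"
    "Think step by step about whether the answer is factual and supported.\n"
    "On the final line, output exactly YES if it contains a hallucination, "
    "or NO if it does not."
)

def chainpoll_score(question: str, answer: str, n_polls: int = 5) -> float:
    """Fraction of independent judge runs that flag the answer as hallucinated."""
    votes = 0
    for _ in range(n_polls):
        verdict = ask_llm(JUDGE_TEMPLATE.format(question=question, answer=answer))
        lines = verdict.strip().splitlines()
        last_line = lines[-1].strip().upper() if lines else ""
        if last_line.startswith("YES"):
            votes += 1
    return votes / n_polls
```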

Join this workshop, where we will showcase powerful metrics for evaluating the quality of inputs (data quality, RAG context quality, etc.) and outputs (hallucinations), with a focus on both RAG and fine-tuning use cases.

What attendees can expect to take away from the workshop:

  • A deep dive into research-backed metrics for evaluating the quality of inputs (data quality, RAG context quality, etc.) and outputs (hallucinations) while building LLM-powered applications.

  • An evaluation and experimentation framework for prompt engineering with RAG, as well as for fine-tuning on your own data.

  • A demo-led practical guide to building guardrails and mitigating hallucinations in LLM-powered applications (a minimal guardrail sketch follows this list).
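
As a companion to the guardrails bullet above, here is a minimal sketch of one possible guardrail that reuses the hypothetical `chainpoll_score` from the earlier snippet; the 0.5 threshold and fallback message are illustrative assumptions, not recommendations from the workshop.

```python
# Minimal guardrail sketch: withhold answers whose hallucination score is too high.
# Reuses the hypothetical chainpoll_score() defined above; the threshold and
# fallback message are illustrative assumptions.

def guarded_answer(question: str, answer: str, threshold: float = 0.5) -> str:
    score = chainpoll_score(question, answer)
    if score >= threshold:
        return "I'm not confident in this answer; please verify it against a trusted source."
    return answer
```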

This event is inspired by DeepLearning.AI’s GenAI short courses, created in collaboration with AI companies across the globe. Our courses help you learn new skills, tools, and concepts efficiently within 1 hour.
