Knowledge Centre


Resources: CustomGPT's no-code visual builder is easy to use; even non-technical people have built amazing custom GPT chatbots.

Two ways to get started: Bring your AI vision to life without writing any code. Get started by uploading documents or using website content, then use our easy no-code visual builder to build your custom GPT chatbot.

  • Build using documents: Start by uploading some documents and get a custom chatbot in seconds. Just select "Create Project" and then the "Upload" tab to upload documents. We support 1400+ document formats.
  • Build using website content: If you need to ingest website content, just input a sitemap into the "Sitemap" tab. Use our free tools to find your sitemap or create a custom sitemap from various forms of web content (websites, helpdesks, YouTube videos, podcasts, RSS feeds, Google results, and more).
  • Create your first project.
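If your pages are not already listed in a sitemap, a standard sitemap file can be generated from a list of page URLs. The sketch below builds a minimal sitemap following the sitemaps.org protocol; the URLs are placeholders, and CustomGPT's exact ingestion requirements may differ.

```python
# Build a minimal XML sitemap (sitemaps.org protocol) from a list of
# page URLs. The example URLs are placeholders.
from xml.etree.ElementTree import Element, SubElement, tostring

def build_sitemap(urls):
    """Return sitemap XML (bytes) listing the given page URLs."""
    urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        loc = SubElement(SubElement(urlset, "url"), "loc")
        loc.text = url
    return tostring(urlset, encoding="utf-8", xml_declaration=True)

if __name__ == "__main__":
    pages = ["https://example.com/", "https://example.com/docs/getting-started"]
    print(build_sitemap(pages).decode("utf-8"))
```

Host the resulting file somewhere reachable (e.g. `https://example.com/sitemap.xml`) and paste that URL into the "Sitemap" tab.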

Email CustomGPT here.

AI courses

1. Introduction to AI - IBM:
2. AI Introduction by Harvard:
3. Intro to Generative AI:
4. Prompt Engineering Intro:
5. Google's Ethical AI:

6. Harvard Data Science & ML:
7. ML with Python - IBM:
8. TensorFlow Google Cloud:
9. Structuring ML Projects:

10. Prompt Engineering Pro:
11. Advanced ML - Google:
12. Advanced Algos - Stanford:

🎁 Bonus:
Amazon's AI Strategy:


Chain-of-thought (CoT) Prompting for LLMs
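Chain-of-thought prompting asks the model to produce intermediate reasoning steps before committing to a final answer. A minimal zero-shot sketch is below; the trigger phrase follows Kojima et al. (2022), and the LLM client call itself is omitted since it depends on your provider.

```python
# Minimal zero-shot chain-of-thought (CoT) prompt construction.
# The trigger phrase "Let's think step by step" follows Kojima et al. (2022).
# Any LLM client (omitted here) would receive the returned string as the prompt.

def cot_prompt(question: str) -> str:
    """Wrap a question so the model reasons step by step before answering."""
    return f"Q: {question}\nA: Let's think step by step."

def answer_extraction_prompt(question: str, reasoning: str) -> str:
    """Second turn: ask for the final answer given the model's own reasoning."""
    return f"Q: {question}\nA: {reasoning}\nTherefore, the answer is"

if __name__ == "__main__":
    print(cot_prompt("A bat and a ball cost $1.10 in total. "
                     "The bat costs $1.00 more than the ball. "
                     "How much does the ball cost?"))
```

In the two-stage zero-shot setup, the model's reply to the first prompt (its reasoning) is fed back through `answer_extraction_prompt` to elicit a short final answer.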

LLM: Hallucinations and Annotation Capabilities

Hallucinations in LLMs

  1. Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models [Zhang et al. 2023]
  2. HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models [Li et al. 2023]
  3. Chain-of-Verification Reduces Hallucination in Large Language Models [Dhuliawala et al. 2023]
  4. MaxMin-RLHF: Towards Equitable Alignment of Large Language Models with Diverse Human Preferences [Chakraborty et al.]

Prompt-engineering for vision language models

  1. What does CLIP know about a red circle? Visual prompt engineering for VLMs [Shtedritski et al. 2023]

Annotation Capabilities of Large Language Models

  1. Machine-assisted mixed methods: augmenting humanities and social sciences with artificial intelligence
  2. Last Words: Empiricism Is Not a Matter of Faith
  3. AFaCTA: Assisting the Annotation of Factual Claim Detection with Reliable LLM Annotators
  4. Is ChatGPT a Good Causal Reasoner? A Comprehensive Evaluation
  5. Large Language Models for Data Annotation: A Survey
  6. Just Ask for Calibration: Strategies for Eliciting Calibrated Confidence Scores from Language Models Fine-Tuned with Human Feedback
  7. ChatGPT Outperforms Crowd-Workers for Text-Annotation Tasks
  8. LLMs Accelerate Annotation for Medical Information Extraction
  9. Can Large Language Models Transform Computational Social Science?
  10. LLMAAA: Making Large Language Models as Active Annotators

Other areas to consider include how to better evaluate annotations generated by LLMs, and how best to combine such annotations with human annotations to improve the training of downstream models.
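One standard way to evaluate LLM-generated annotations against human labels is chance-corrected agreement, such as Cohen's kappa. The sketch below is self-contained; the "claim"/"none" labels are illustrative only.

```python
# Cohen's kappa: chance-corrected agreement between two annotators
# (here, a human and an LLM) labeling the same items.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Return Cohen's kappa for two equal-length label sequences."""
    assert len(labels_a) == len(labels_b), "annotators must label the same items"
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Expected agreement if both annotators labeled at random
    # with their own observed label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    human = ["claim", "claim", "none", "claim", "none", "none"]
    llm   = ["claim", "none",  "none", "claim", "none", "claim"]
    print(round(cohens_kappa(human, llm), 3))  # prints 0.333
```

Values near 1 indicate strong agreement beyond chance, near 0 indicate chance-level agreement; in practice a library implementation (e.g. scikit-learn's `cohen_kappa_score`) would be used instead of this hand-rolled version.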
