Key Insights
Warning About AI Confidence (Hallucination)
AI models are trained to sound confident because standard training and evaluation schemes reward guessing over admitting uncertainty. Be cautious: a model may validate incorrect information in a very convincing tone.
Tips for Reducing AI Hallucination
To get more accurate onboarding results, encourage scoring systems that give the model credit for answering 'I don't know' rather than rewarding a lucky guess; the expected-value arithmetic behind this is worked through below.
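The incentive is easy to see with a little expected-value arithmetic. The Python sketch below compares a binary grading scheme with one that penalizes wrong answers; the specific point values (0 versus -1 for a wrong answer) are illustrative assumptions, not figures from the source paper.

```python
# A minimal sketch of why grading schemes shape guessing behavior.
# Payoff values (wrong = 0 vs. wrong = -1, abstain = 0) are
# illustrative assumptions, not values from the source document.

def expected_score(p_correct: float, right: float, wrong: float) -> float:
    """Expected score for guessing when the model is right with probability p."""
    return p_correct * right + (1 - p_correct) * wrong

for p in (0.9, 0.5, 0.2):
    # Binary grading: 1 point if right, 0 if wrong, 0 for "I don't know".
    guess_binary = expected_score(p, right=1.0, wrong=0.0)
    # Penalty grading: 1 if right, -1 if wrong, 0 for "I don't know".
    guess_penalty = expected_score(p, right=1.0, wrong=-1.0)
    abstain = 0.0
    print(f"p={p:.1f}  binary: guess={guess_binary:+.2f} vs abstain={abstain:+.2f}  "
          f"penalty: guess={guess_penalty:+.2f} vs abstain={abstain:+.2f}")
```

Under binary grading a guess never scores below an abstention, so bluffing is always optimal; once wrong answers cost points, abstaining wins whenever confidence drops below 50%.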
Prompts
Technical Document Summary Prompt
Target: NotebookLM
Summarize this technical paper into a clear, step-by-step onboarding guide. Focus on explaining why language models are trained to sound confident and how the training process rewards guessing over admitting uncertainty. Suggest how exam scoring can be modified to improve model accuracy.
Step by Step
Building an Onboarding Guide with NotebookLM
- Visit the NotebookLM website (notebooklm.google.com) and log in with your Google account.
- Click on the 'New Notebook' button to start a new project for your onboarding materials.
- Click the 'Add Source' button and select the technical document (such as a PDF or Google Doc) you want to convert into a guide.
- Wait for the AI to finish analyzing the document and automatically generate a 'Source Guide'.
- Type a summarizer prompt (such as the Technical Document Summary Prompt above) into the chat box to extract the key steps from the document.
- Review the summary options the AI provides and select the best-organized one (for example, 'Summary 2').
- Click the 'Pin to Note' icon on the best AI response to save it as an official report or guide within the notebook (a rough scripted analogue of this flow is sketched below).
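NotebookLM is driven entirely through its web UI, and I'm not aware of a public API for it, so the sketch below is only a rough programmatic analogue: it feeds the same summarizer prompt plus a PDF's extracted text to the Gemini API through the google-generativeai SDK. The onboarding.pdf path and the gemini-1.5-flash model name are assumptions for illustration.

```python
# Rough programmatic analogue of the NotebookLM flow above, using the
# Gemini API directly. NotebookLM has no public API that this sketch
# relies on; the model name and file path are illustrative assumptions.
import os

import google.generativeai as genai
from pypdf import PdfReader

SUMMARIZER_PROMPT = (
    "Summarize this technical paper into a clear, step-by-step onboarding "
    "guide. Focus on explaining why language models are trained to sound "
    "confident and how the training process rewards guessing over admitting "
    "uncertainty. Suggest how exam scoring can be modified to improve model "
    "accuracy."
)

# Extract the source document's text (stand-in for 'Add Source').
reader = PdfReader("onboarding.pdf")  # hypothetical file path
document_text = "\n".join(page.extract_text() or "" for page in reader.pages)

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

# Stand-in for typing the summarizer prompt into the chat box.
response = model.generate_content([SUMMARIZER_PROMPT, document_text])
print(response.text)  # review, then save the best version as your guide
```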
Managing Data Permissions in Claude AI
- Open the Claude AI app or website.
- Navigate to the 'Permissions' section in your account settings.
- Grant access to 'Location', 'Calendar', or 'Reminders' if you want Claude to provide local restaurant recommendations or manage your schedule.
- Confirm the settings to allow the AI to access your real-time data (a developer-side analogue is sketched below).
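The toggles above live in the consumer Claude apps and have no direct API equivalent that I can confirm. The closest developer-side analogue is tool use in the Anthropic API, where your code decides what data the model may request. In the sketch below, the get_calendar_events tool, its schema, and the model string are all hypothetical illustrations, not settings from the app.

```python
# Sketch of the developer-side analogue to the app's permission toggles:
# tool use in the Anthropic API. The tool name, schema, and model string
# are illustrative assumptions; the consumer app's 'Permissions' settings
# are a separate, UI-level mechanism.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

calendar_tool = {
    "name": "get_calendar_events",  # hypothetical tool
    "description": "Return the user's calendar events for a given date.",
    "input_schema": {
        "type": "object",
        "properties": {
            "date": {"type": "string", "description": "ISO date, e.g. 2025-01-15"},
        },
        "required": ["date"],
    },
}

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model name
    max_tokens=1024,
    tools=[calendar_tool],
    messages=[{"role": "user", "content": "What's on my schedule tomorrow?"}],
)

# Claude never reads the calendar itself; it emits a tool_use request that
# your code must choose to fulfil, which is the explicit grant of access.
for block in response.content:
    if block.type == "tool_use":
        print(f"Model requested {block.name} with input {block.input}")
```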