5-Day Gen AI Intensive
Seems good
Went through the whole thing; it was interesting

Is registration required?

I think you just log in with a Kaggle account and a Discord account
The course's selling point is that you can try things out in Kaggle code labs, so as long as you have a Kaggle account there should be no need to register

I see

For now I'll try it with just a Kaggle account
Created one via Google login
✅ Sign up for a Google AI Studio account and ensure you can generate an API key.
Day 1 (Foundational Large Language Models & Prompt Engineering)
>Today you’ll explore the evolution of LLMs, from transformers to techniques like fine-tuning and inference acceleration. You’ll also get trained in the art of prompt engineering for optimal LLM interaction.
>The code lab will walk you through getting started with the Gemini API and cover several prompt techniques and how different parameters impact the prompts.
1. Complete the Intro Unit: “Foundational Large Language Models”, which is:
1. Listen to the summary podcast episode for this unit (created by NotebookLM).
2. Complete Unit 1 – “Prompt Engineering”, which is:
1. Listen to the summary podcast episode for this unit (created by NotebookLM).
I've read about half of it

Surprisingly there are things I didn't know, which is interesting
They're probably considered too obvious to be explained anywhere
Around week 2, when I organize my own knowledge, I'd like to grow pages for this on /work4ai
3. Complete this code lab on Kaggle, where you'll learn prompting fundamentals. Make sure you phone-verify your account before starting; it's necessary for the code labs.
Did "Copy & Edit" on the code lab

Now I can edit and run it
Generated an API key
When you save it, a code snippet like this comes up
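Probably something along these lines, assuming the key is saved in Kaggle's Secrets add-on under the name GOOGLE_API_KEY:

# Read the API key from Kaggle Secrets and configure the SDK
# (the secret name "GOOGLE_API_KEY" is an assumption)
from kaggle_secrets import UserSecretsClient
import google.generativeai as genai

GOOGLE_API_KEY = UserSecretsClient().get_secret("GOOGLE_API_KEY")
genai.configure(api_key=GOOGLE_API_KEY)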
Tried running a code block further down
Got an error.
I'd skipped the earlier code blocks, so of course it failed
There's a command that runs all the earlier blocks in order: Run > Run before. I used that
There was no keyboard shortcut for it.
It ran successfully!
Run your first prompt
It worked!
Let me tweak it a bit
import google.generativeai as genai

flash = genai.GenerativeModel('gemini-1.5-flash')
response = flash.generate_content("Explain AI to me like I'm a kid in Japanese.")
print(response.text)
It answered in Japanese
Some mysterious romaji transliteration got added too... oh well
output:
ねえ、知ってる? AIって言うのがいるんだよ! AIはね、まるで賢いお友達みたいなんだ。
(Nee, shitteiru? AI tte iu no ga iru nda yo! AI wa ne, marude kashikoi o-tomodachi mitai nan da.) (Hey, do you know? There's something called AI! AI is like a smart friend.)
例えばね、AIは「猫の写真を見せて!」って言ったら、猫の写真をいっぱい見せてくれるんだよ。 自分で考えて、選んで見せてくれるんだ! すごいよね?
Oh really! Amazing!

output (continued):
(Tatteiba ne, AI wa "neko no shashin o misete!" tte ittara, neko no shashin o ippai misete kureru nda yo. Jibun de kangaete, erande misete kureru nda! Sugoi yo ne?) (For example, if you tell AI "Show me pictures of cats!", it shows you lots of cat pictures. It thinks and chooses them by itself! Amazing, right?)
あとね、AIはゲームも上手だよ! 難しいゲームでも、たくさん練習して上手になるんだ。 まるで人間みたい!
(Ato ne, AI wa ge-mu mo jouzu da yo! Muzukashii ge-mu demo, takusan renshuu shite jouzu ni naru nda. Marude ningen mitai!) (Also, AI is good at games! Even difficult games, it gets good at them by practicing a lot. Just like a human!)
でもね、AIはまだ赤ちゃんみたいなところもあるんだ。 全部が全部、完璧にできるわけじゃないんだよ。 だから、人間がAIに教えてあげたり、手伝ってあげたりすることもあるんだよ。
(Demo ne, AI wa mada akachan mitai na tokoro mo aru nda. Zenbu ga zenbu, kanpeki ni dekiru wake ja nai nda yo. Dakara, ningen ga AI ni oshiete age tari, tasukete age tari suru koto mo aru nda yo.) (But, AI is still like a baby in some ways. It can't do everything perfectly. So, humans sometimes teach and help AI.)
AIはこれからもっともっと賢くなっていくと思うよ! 楽しみだね!
(AI wa kore kara motto motto kashikoku natte iku to omou yo! Tanoshimi da ne!) (I think AI will become even smarter from now on! I'm looking forward to it!)
Yeah! Looking forward to it!

Start a chat
An example of multi-turn code
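Roughly like this, assuming the same google.generativeai SDK (the messages are my own, not the lab's):

# start_chat keeps the conversation history, so later turns
# can refer back to earlier ones
import google.generativeai as genai

flash = genai.GenerativeModel('gemini-1.5-flash')
chat = flash.start_chat(history=[])

print(chat.send_message("Hi! My name is Taro.").text)
print(chat.send_message("Do you remember my name?").text)  # answered using the history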
By the way
Can Kaggle notebooks be run from VS Code?

There's probably an extension for it
Apparently not
Working in the web browser is fine for now, so I'll just keep using the browser
Also, I'd rather run this in TypeScript than in Python
Seems impossible

Only Python and R are supported
There is, of course, a TypeScript API client for Gemini
Choose a model
genai.list_models() gets you the list of available models
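For example (a quick sketch; the available names change over time):

# List the models this API key can use, and show which support generate_content
import google.generativeai as genai

for model in genai.list_models():
    if 'generateContent' in model.supported_generation_methods:
        print(model.name)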
The writing style of the output changed a bit
Explore generation parameters
max_output_tokens limits the size of the LLM's output
Go past the limit and the output just gets cut off midway
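A sketch of how it's set, assuming genai.GenerationConfig:

# Cap the output at 16 tokens; anything longer is cut off, not summarized
import google.generativeai as genai

short_model = genai.GenerativeModel(
    'gemini-1.5-flash',
    generation_config=genai.GenerationConfig(max_output_tokens=16),
)
print(short_model.generate_content("Write a poem about the moon.").text)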
from google.api_core import retry

# When running lots of queries, it's a good practice to use a retry policy so your code
# automatically retries when hitting Resource Exhausted (quota limit) errors.
retry_policy = {
    "retry": retry.Retry(predicate=retry.if_transient_error, initial=10, multiplier=1.5, timeout=300)
}
Confirmed that with temperature=0.0 the exact same answer comes back every time
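Something along these lines shows it (reusing the retry_policy above; the prompt is my own):

# temperature=0.0 makes decoding greedy: the most likely token is always picked,
# so the same prompt returns (nearly) the same answer on every run
import google.generativeai as genai

model = genai.GenerativeModel(
    'gemini-1.5-flash',
    generation_config=genai.GenerationConfig(temperature=0.0),
)
for _ in range(3):
    print(model.generate_content("Pick a random colour.", request_options=retry_policy).text)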
Top-K and top-P
Honestly, I can't feel the difference yet

(copied over from a private page)
Roughly speaking, both control diversity
The difference between K and P is more or less absolute vs. relative: top-K keeps a fixed number of candidate tokens, while top-P keeps however many tokens it takes to reach a cumulative probability of P
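Both knobs sit in the same config, e.g. (the values are arbitrary):

# top_k=64: sample only from the 64 most likely next tokens (a fixed count)
# top_p=0.95: sample from the smallest set of tokens whose probabilities sum to 0.95
import google.generativeai as genai

model = genai.GenerativeModel(
    'gemini-1.5-flash',
    generation_config=genai.GenerationConfig(top_k=64, top_p=0.95, temperature=1.0),
)
print(model.generate_content("Invent a name for a friendly robot.").text)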

Prompting
You can specify things like JSON output via the config
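For JSON specifically, a sketch assuming the response_mime_type field of GenerationConfig:

# Ask the model for machine-readable JSON instead of free text
import json
import google.generativeai as genai

model = genai.GenerativeModel(
    'gemini-1.5-flash',
    generation_config=genai.GenerationConfig(response_mime_type="application/json"),
)
response = model.generate_content("List three primary colours as a JSON array of strings.")
print(json.loads(response.text))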
Code Prompting
Ran through it casually
>Done up to here
4. Read this case study to learn how a leading bank leveraged advanced prompt engineering and other techniques discussed in Day 1's assignments to automate their financial advisory workflows, achieving significant productivity gains.
5. Watch the YouTube livestream recording.
Paige Bailey will be joined by expert speakers from Google - Mohammadamin Barekatain, Lee Boonstra, Logan Kilpatrick, Daniel Mankowitz, Majd Merey Al, Anant Nawalgaria, Aliaksei Severyn and Chuck Sugnet to discuss today's readings and code labs.
Day 2 (Embeddings and Vector Stores/Databases)
>Today you will learn about the conceptual underpinning of embeddings and vector databases and how they can be used to bring live or specialist data into your LLM application. You’ll also explore their geometrical powers for classifying and comparing textual data.
1. Complete Unit 2: “Embeddings and Vector Stores/Databases”, which is:
1. Listen to the summary podcast episode for this unit (created by NotebookLM).
2. Complete these code labs on Kaggle:
1. Build a RAG question-answering system over custom documents
2. Explore text similarity with embeddings
3. Build a neural classification network with Keras using embeddings
3. Watch the YouTube livestream recording.
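For a flavour of what these labs build on, a minimal sketch of embedding-based similarity, assuming genai.embed_content and the text-embedding-004 model (the sentences are my own):

# Embed two sentences and compare them with cosine similarity
import numpy as np
import google.generativeai as genai

def embed(text):
    result = genai.embed_content(model="models/text-embedding-004", content=text)
    return np.array(result["embedding"])

a = embed("The cat sat on the mat.")
b = embed("A feline rested on the rug.")
print(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))  # near 1.0 = similar meaning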
Day 3 (Generative Agents)
>Learn to build sophisticated AI agents by understanding their core components and the iterative development process.
>The code labs cover how to connect LLMs to existing systems and to the real world. Learn about function calling by giving SQL tools to a chatbot, and learn how to build a LangGraph agent that takes orders in a café.
Day 3 Assignments:
1. Complete Unit 3: “Generative Agents”, which is:
1. Listen to the summary podcast episode for this unit (created by NotebookLM).
2. Read a case study which talks about how a leading technology regulatory reporting solutions provider used an agentic generative AI system to automate ticket-to-code creation in software development, achieving a 2.5x productivity boost.
3. Complete these code labs on Kaggle:
1. Talk to a database with function calling
2. Build an agentic ordering system in LangGraph
4. Watch the YouTube livestream recording.
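The gist of function calling, as a sketch assuming the SDK's automatic function calling (the function is a hypothetical stand-in, not the lab's actual SQL tools):

# Hand the model a Python function as a tool; the SDK runs the
# call-and-respond loop automatically
import google.generativeai as genai

def list_tables() -> list[str]:
    """Return the names of the tables in the database (stand-in data)."""
    return ["products", "orders", "staff"]

model = genai.GenerativeModel('gemini-1.5-flash', tools=[list_tables])
chat = model.start_chat(enable_automatic_function_calling=True)
print(chat.send_message("What tables does the database have?").text)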
Day 4 (Domain-Specific LLMs)
> In today’s reading, you’ll delve into the creation and application of specialized LLMs like SecLM and MedLM/Med-PaLM, with insights from the researchers who built them.
> In the code labs you will learn how to add real world data to a model beyond its knowledge cut-off by grounding with Google Search. You will also learn how to fine-tune a custom Gemini model using your own labeled data to solve custom tasks.
1. Complete Unit 4: “Domain-Specific LLMs”, which is:
1. Listen to the summary podcast episode for this unit (created by NotebookLM).
2. Complete these code labs on Kaggle:
1. Use Google Search data in generation
2. Tune a Gemini model for a custom task
3. Watch the YouTube livestream recording.
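The key call in the search-grounding lab looks roughly like this; tools='google_search_retrieval' is my assumption of the SDK's shortcut for enabling the search tool:

# Ground the answer in live Google Search results rather than
# the model's training data, which stops at its knowledge cut-off
import google.generativeai as genai

model = genai.GenerativeModel('gemini-1.5-flash')
response = model.generate_content(
    "Who won the most recent Nobel Prize in Physics?",
    tools='google_search_retrieval',  # assumption: string shortcut for the search tool
)
print(response.text)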
Day 5 (MLOps for Generative AI)
>Discover how to adapt MLOps practices for Generative AI and leverage Vertex AI's tools for foundation models and generative AI applications.
1. Complete Unit 5: “MLOps for Generative AI”, which is:
1. Listen to the summary podcast episode for this unit (created by NotebookLM).
2. No code lab for today! Please go through the repository in advance.
3. Watch the YouTube livestream recording.
Bonus Assignment
There's more!
>This bonus notebook walks you through a few more things you can do with the Gemini API that weren't covered during the course. This material doesn't pair with the whitepapers or podcast, but covers some extra capabilities you might find useful when building Gemini API powered apps.