Newsletter Issue 4
Summary and Reflection 🤔
This newsletter is a deep dive on using the latest AI techniques for knowledge management and a tutorial on using Logseq for task management.
We are living in an exciting time for AI. Several new cutting-edge techniques make it possible to search by "meaning" and "concepts" instead of just simple keywords. There are also new techniques that let AI generate new text and images instead of just analyzing them.
I also include some articles about Yann LeCun, head of AI at Meta (Facebook), who has many exciting ideas about the future of AI. This week he put out a research roadmap on what he thinks is a potential path forward to human-level artificial intelligence.
Updates 📣
notetaking-with-AI
See this guide to learn how you can use the latest AI techniques for personal knowledge management.
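To give a flavor of what searching by "meaning" looks like, here is a minimal sketch of semantic search over notes. This is my own toy illustration, not code from the guide: it assumes the open-source sentence-transformers library, and the model name, notes, and query are placeholders.

```python
# Toy semantic search: rank notes by meaning rather than keyword overlap.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

notes = [
    "How to cook pasta perfectly every time",
    "Training neural networks with gradient descent",
    "My favorite hiking trails near the city",
]
query = "machine learning optimization"

# Embed notes and query into the same vector space, then compare them by
# cosine similarity: closeness in this space approximates closeness in meaning.
note_vectors = model.encode(notes)
query_vector = model.encode(query)
scores = util.cos_sim(query_vector, note_vectors)[0].tolist()

# The neural-network note should rank first even though it shares no
# keywords with the query.
for score, note in sorted(zip(scores, notes), reverse=True):
    print(f"{score:.2f}  {note}")
```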
Productivity Toolkit 🛠️
In this section, I'll share a productivity tip I've learned recently.
logseq-tasks
In this guide, I write a basic tutorial on how to use Logseq for task management.
Many people use Logseq primarily as a note-taking tool, but I extensively use its task management capabilities.
My tasks determine which notes I write, based on the projects I'm working on.
One of the most powerful ideas in Logseq is mixing your tasks throughout your pages and notes, then organizing and grouping them with queries, as in the sketch below.
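For example, here is a minimal query that gathers every open task referencing a given project page, no matter which note or journal entry it lives in. This is a sketch using Logseq's simple-query syntax: the `[[newsletter]]` page name is just a placeholder, and depending on your Logseq version the marker filter is spelled `(task ...)` or `(todo ...)`.

```
{{query (and (task TODO DOING) [[newsletter]])}}
```

Putting a block like this on a page renders a live list of the matching tasks, grouped by the page they came from, so TODOs scattered across your notes read like a single project dashboard.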
Brain Food 🧠
In this section, I'll share some interesting articles and "food for thought."
Yann LeCun is the head of AI at Meta (Facebook) and one of the top AI researchers in the world.
Quote
LeCun believes that machines observing the world aren't nearly enough for them to become intelligent. Real progress will happen when machines can take action in the real world and learn from the consequences of their actions, observing with the most high-fidelity inputs possible, like vision and sound.
"What's missing (from AI) is a principle that would allow our machine to learn how the world works by observation and by interaction with the world. A learning predictive world model is what we're missing today, and in my opinion is the biggest obstacle to significant progress in AI."
Link of the week
I highly recommend this article to hear about his vision for the future of AI: A bold new vision for the future of AI
In 10 or 15 years, people won't be carrying smartphones in their pockets, but augmented-reality glasses fitted with virtual assistants that will guide humans through their day. "For those to be most useful to us, they basically have to have more or less human-level intelligence."
His area of research is how to give machines "common sense" and create human-level artificial intelligence.
"Common sense" is the catch-all term for this kind of intuitive reasoning. It includes a grasp of simple physics: for example, knowing that the world is three-dimensional and that objects don't actually disappear when they go out of view. It lets us predict where a bouncing ball or a speeding bike will be in a few seconds' time.
I think one of his most interesting ideas is "grounded intelligence": he says machines will never reach human-level intelligence by reading text alone and will need much richer inputs from the real world.
There isn't a text in the world that explains mundane fundamentals, like the fact that when you hear a metallic crash in the kitchen, it probably came from a falling pan.
His research focuses on video because Facebook and Instagram host huge amounts of it, and the video format contains rich information about the world.
He trains machines to predict what will happen next by watching video clips.
He does this using a technique called "self-supervised learning": instead of a human labeling the data, the training signal comes from the data itself, so machines can learn without a human teaching them.
A "self-supervised" training process for videos looks like this (sketched in code after the steps below):
A machine will watch half a video
Then, it will try to predict what will happen next in the video
After making a prediction, it will watch the second half of the video to see if its prediction was correct.
Then, it improves itself based on whether the prediction was correct.
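To make that loop concrete, here is a minimal toy sketch in PyTorch. The random "video," the tiny model, and the loss are all invented for illustration; this is not LeCun's actual architecture.

```python
# Self-supervised next-frame prediction on a toy "video": the target is the
# second half of the video itself, so no human annotation is needed.
import torch
import torch.nn as nn

# Stand-in video: 16 frames of 3 x 32 x 32 random pixels.
video = torch.rand(16, 3, 32, 32)
first_half, second_half = video[:8], video[8:]

# Deliberately tiny predictor: guesses the frame 8 steps into the future.
predictor = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256),
    nn.ReLU(),
    nn.Linear(256, 3 * 32 * 32),
)
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)

for step in range(100):
    # 1) "Watch" the first half and predict the second half.
    prediction = predictor(first_half).view_as(second_half)
    # 2) Compare the prediction with what actually happened.
    loss = nn.functional.mse_loss(prediction, second_half)
    # 3) Improve based on how wrong the prediction was.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The important part is step 2: the "correct answer" is just the rest of the video, so the machine supervises itself.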
They're doing the same thing for voice. Why do you think your Google Home was only $25? Google's AI is using your voice as training data. It's listening to an audio snippet of what you say and seeing if it can predict what you'll say next.
Google Assistant - What technologies we use to train speech models
Audio samples are collected and stored on Google's servers.
A portion of these audio samples are annotated by human reviewers.
A training algorithm learns from the annotated audio samples (a toy sketch of this step follows below).
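For contrast with the self-supervised video loop above, here is a toy sketch of this kind of annotate-then-train step. The snippet length, the ten label classes, and the linear model are invented for illustration and say nothing about Google's real systems.

```python
# Supervised training on annotated audio: unlike the video example, the
# targets come from human reviewers rather than from the data itself.
import torch
import torch.nn as nn

audio_snippets = torch.rand(64, 400)         # 64 collected audio samples
human_labels = torch.randint(0, 10, (64,))   # annotations from human reviewers

classifier = nn.Linear(400, 10)              # 10 hypothetical speech classes
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1)

for step in range(50):
    # The loss measures disagreement with the human-provided labels.
    loss = nn.functional.cross_entropy(classifier(audio_snippets), human_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```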
In these interviews, LeCun hints at the connection between AI and robotics: that machines will really start learning fast when they are out in the real world, autonomously experiencing it, making decisions, and learning from mistakes.
This idea reminds me of the future in Westworld, where the robots are almost indistinguishable from human beings, then "awaken" and become conscious while interacting with humans in "high-fidelity" ways.
Take a look at these interviews and his new paper for more about Yann LeCun and his vision for the future of AI.
Yann LeCun Lex Fridman Podcast 1
Yann LeCun Lex Fridman Podcast 2
A Path Towards Autonomous Machine Intelligence
Analytics 📈
I can't believe the newsletter has already grown to over 100 subscribers!
It's doubling almost every week, going from 10 -> 30 -> 60 -> 120 -> ??
That is already way more people than I expected. Knowing that even a few people are reading motivates me to keep creating and posting high-quality notes.
Outro
I hope you enjoyed this week's newsletter.
Next week, we'll continue with more Logseq guides, like how to manage projects.
I'll also get started on my data structures and algorithms guide with an intro. In future issues, we'll build up this guide on algorithms in great detail. Hopefully, this will help others learn algorithms and showcase my approach to note-taking.
Check out the newsletter-roadmap to see what I have in mind for future issues. Let me know what you think on Twitter @bsunter.