Brian Sunter

Newsletter Issue 4

This newsletter is a deep dive on using the latest AI techniques for knowledge management and a tutorial on using Logseq for task management.

Summary and Reflection 🤔


We are living in an exciting time for AI right now. Several new cutting-edge techniques now exist to search by "meaning" and "concepts" instead of just simple keywords. There are also new techniques that let AI generate new text and images instead of just analyzing them.

I also include some articles about Yann LeCun, head of AI at Meta (Facebook), who has many exciting ideas about the future of AI. This week he put out a research roadmap describing what he thinks is a potential path to human-level artificial intelligence.

Updates 🆕

notetaking-with-AI

See this guide to learn how you can use the latest AI techniques for personal knowledge management.

Productivity Toolkit 🛠️

In this section, I'll share a productivity tip I've learned recently.

logseq-tasks

In this guide, I write a basic tutorial on how to use Logseq for task management.

Many people use Logseq primarily as a note-taking tool, but I extensively use its task management capabilities.

My tasks determine which notes I write, based on the projects I'm working on.

One of the most powerful ideas of Logseq is mixing your tasks throughout your pages and notes, then organizing and grouping them with queries.
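For example, a TODO can live on whatever page I happen to be writing, and a project page can gather all of those tasks with a query block. Here is a minimal sketch using Logseq's simple query syntax (the [[Newsletter]] page name is just a placeholder for one of your own projects):

```
- TODO Draft the intro for [[Newsletter]] issue 5
- DOING Edit the [[Newsletter]] task management guide

- {{query (and (todo TODO DOING) [[Newsletter]])}}
```

The query pulls in every TODO or DOING block that links to [[Newsletter]], no matter which page or daily note it was written on.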

Brain Food 🧠

In this section, I'll share some interesting articles and "food for thought."

Yann LeCun is the head of AI at Meta (Facebook) and one of the top AI researchers in the world.

Quote

LeCun believes that merely observing the world isn't nearly enough for machines to become intelligent. Real progress will happen when machines can take action in the real world and learn from the consequences of their actions, observing with the highest-fidelity inputs possible, like vision and sound.

"What's missing (from AI) is a principle that would allow our machine to learn how the world works by observation and by interaction with the world. A learning predictive world model is what we're missing today, and in my opinion is the biggest obstacle to significant progress in AI."

I highly recommend this article to hear about his vision for the future of AI: A bold new vision for the future of AI

In 10 or 15 years people won't be carrying smartphones in their pockets, but augmented-reality glasses fitted with virtual assistants that will guide humans through their day. "For those to be most useful to us, they basically have to have more or less human-level intelligence"

His area of research is how to give machines "common sense" and create human-level artificial intelligence.

"Common sense" is the catch-all term for this kind of intuitive reasoning. It includes a grasp of simple physics: for example, knowing that the world is three-dimensional and that objects don't actually disappear when they go out of view. It lets us predict where a bouncing ball or a speeding bike will be in a few seconds' time.

I think one of his most interesting ideas is "Grounded Intelligence." He says that machines will never reach human-level intelligence by reading text alone; they need much richer inputs from the real world.

There isn't a text in the world that explains mundane fundamentals, like the fact that a metallic crash in the kitchen probably came from a pan falling.

His research focuses on video because Facebook and Instagram host huge amounts of it, and the video format contains rich information about the world.

He trains machines to predict what will happen next by watching video clips.

He does this using a technique called "self-supervised learning," meaning the machines learn on their own, without a human teaching them or labeling the data.

A "self-supervised" training process for videos looks like this (a rough code sketch follows the steps):

A machine will watch the first half of a video.

Then, it will try to predict what will happen next in the video.

After making a prediction, it will watch the second half of the video to see if its prediction was correct.

Then, it improves itself based on whether its prediction was correct.
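To make that loop concrete, here is a minimal, hypothetical sketch in Python (my own toy illustration, not LeCun's or Meta's actual system). "Videos" are reduced to sequences of feature vectors evolving under fixed dynamics, and a tiny linear model improves purely by predicting the second half of each clip and correcting itself:

```python
# Toy self-supervised loop: watch the first half of a "clip", predict the
# second half, compare against what actually happens, and update the model.
import numpy as np

rng = np.random.default_rng(0)
frame_dim, clip_len = 8, 10
true_dynamics = 0.1 * rng.normal(size=(frame_dim, frame_dim))  # how the toy "world" evolves
W = np.zeros((frame_dim, frame_dim))  # the model's current guess at those dynamics
lr = 0.01

def make_clip():
    """A fake clip: each frame is the previous frame pushed by the world dynamics."""
    frames = [rng.normal(size=frame_dim)]
    for _ in range(clip_len - 1):
        frames.append(frames[-1] + true_dynamics @ frames[-1])
    return np.array(frames)

for step in range(2001):
    clip = make_clip()
    first_half, second_half = clip[: clip_len // 2], clip[clip_len // 2:]

    # 1) Watch the first half, 2) predict the second half frame by frame.
    frame = first_half[-1]
    predictions = []
    for _ in range(len(second_half)):
        frame = frame + W @ frame
        predictions.append(frame)
    predictions = np.array(predictions)

    # 3) Watch the real second half and measure how wrong the prediction was.
    error = predictions - second_half

    # 4) Improve the model based on the error (gradient step on the first predicted frame).
    W -= lr * np.outer(error[0], first_half[-1])

    if step % 500 == 0:
        print(f"step {step}: prediction error = {np.mean(error ** 2):.4f}")
```

No labels or human corrections are involved; the second half of every clip is its own answer key, which is the whole point of self-supervision.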

They're doing the same thing for voice. Why do you think your Google Home was only $25? Google's AI is using your voice as training data. It's listening to an audio snippet of what you say and seeing if it can predict what you'll say next.

Google Assistant - What technologies we use to train speech models

Audio samples are collected and stored on Google's servers.

A portion of these audio samples are annotated by human reviewers.

A training algorithm learns from annotated audio data samples.
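As a toy illustration of that "predict what you'll say next" idea, here is a tiny next-word predictor built by counting word pairs in a few made-up utterances (my own sketch in Python; Google's real pipeline works on audio with neural models and human-annotated samples, as the steps above describe):

```python
# Learn, by simple counting, which word tends to follow which in the
# collected samples, then use those counts to guess the next word.
from collections import Counter, defaultdict

utterances = [
    "turn on the lights",
    "turn on the radio",
    "turn off the lights",
    "set a timer for ten minutes",
]

following = defaultdict(Counter)
for utterance in utterances:
    words = utterance.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def predict_next(word):
    """Guess the most likely next word, if we have seen this word before."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("turn"))  # -> "on"  (seen twice, vs. "off" once)
print(predict_next("the"))   # -> "lights"
```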

In these interviews, LeCun hints at the connection between AI and robotics: that machines will really start learning fast when they are out in the real world, autonomously experiencing it, making decisions, and learning from mistakes.

This idea reminds me of the future in Westworld, where the robots are almost indistinguishable from human beings, then "awaken" and become conscious while interacting with the humans in "high-fidelity" ways.

Take a look at these interviews and his new paper for more about Yann LeCun and his vision for the future of AI.

Yann LeCun Lex Fridman Podcast 1

Yann LeCun Lex Fridman Podcast 2

A Path Towards Autonomous Machine Intelligence

Analytics 📈

I can't believe the newsletter has already grown to over 100 subscribers!

It's doubling almost every week, going from 10 -> 30 -> 60 -> 120 -> ??

That is already way more people than I was expecting. Knowing that even a few people are looking at this motivates me to keep creating and posting high-quality notes.

Outro

I hope you enjoyed this week's newsletter.

Next week, we'll continue with more Logseq guides, like how to manage projects.

I'll also get started on my data structures and algorithms guide with an intro. In future issues, we'll build up this guide on algorithms in great detail. Hopefully, this will help others learn algorithms and showcase my approach to note-taking.

Check out the newsletter-roadmap to see what I have in mind for future issues. Let me know what you think on Twitter: @bsunter
