Brian Sunter

Newsletter Issue 4

This newsletter is a deep dive on using the latest AI techniques for knowledge management and a tutorial on using Logseq for task management.

#newsletter #ai #logseq #logseq-openai/project

Summary and Reflection 🤔

We are living in an exciting time for AI. Several new cutting-edge techniques make it possible to search by “meaning” and “concepts” instead of just simple keywords. There are also new techniques that let AI generate new text and images instead of just analyzing them.
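To make “search by meaning” concrete, here is a rough sketch in Python. The `embed` function is a hypothetical stand-in for a real sentence-embedding model (for example, one from OpenAI’s API or the sentence-transformers library); here it returns a deterministic random vector just so the sketch runs end to end.

```python
# Semantic search sketch: embed each note as a vector, then rank notes by
# cosine similarity to the query's vector instead of by keyword overlap.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical stand-in: swap in a real embedding model here.
    # A deterministic random vector keeps the sketch self-contained.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def semantic_search(query: str, notes: list[str], top_k: int = 3) -> list[str]:
    q = embed(query)
    scores = []
    for note in notes:
        v = embed(note)
        # Cosine similarity: close to 1.0 means "similar meaning".
        scores.append(float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v))))
    ranked = sorted(zip(scores, notes), reverse=True)
    return [note for _, note in ranked[:top_k]]

notes = ["Logseq task management", "Self-supervised learning", "Baking bread"]
print(semantic_search("organizing todos in my notes", notes, top_k=1))
```

With a real embedding model, a query like this would match the Logseq note even though the two share almost no words.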

I also include some articles about Yann LeCun, head of AI at Meta (Facebook), who has many exciting ideas about the future of AI. This week, he published a research roadmap outlining what he sees as a potential path to human-level artificial intelligence.

Updates 🆕

notetaking-with-AI

See this guide to learn how you can use the latest AI techniques for personal knowledge management.

Productivity Toolkit 🛠️

In this section, I’ll share a productivity tip I’ve learned recently.

logseq-tasks

This guide is a basic tutorial on using Logseq for task management.

Many people use Logseq primarily as a note-taking tool, but I extensively use its task management capabilities.

My tasks determine which notes I write, based on the projects I’m working on.

One of the most powerful ideas of Logseq is mixing your tasks throughout your pages and notes, then organizing and grouping them with queries.
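For example, a simple query like the one below gathers every TODO or DOING task that references a given project page, no matter where those tasks live in your graph (the page name `[[my-project]]` is just a placeholder):

```
{{query (and (todo TODO DOING) [[my-project]])}}
```

Put a query like this on a project page, and it becomes a live, self-updating task list.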

Brain Food 🧠

In this section, I’ll share some interesting articles and “food for thought.”

Yann LeCun is the head of AI at Meta (Facebook) and one of the top AI researchers in the world.

LeCun believes that merely observing the world isn’t nearly enough for machines to become intelligent. Real progress will happen when machines can take action in the real world and learn from the consequences of their actions, observing with the most high-fidelity inputs possible, like vision and sound.

“What’s missing (from AI) is a principle that would allow our machine to learn how the world works by observation and by interaction with the world. A learning predictive world model is what we’re missing today, and in my opinion is the biggest obstacle to significant progress in AI.”

I highly recommend this article for a deeper look at his vision for the future of AI: A bold new vision for the future of AI

In 10 or 15 years, people won’t be carrying smartphones in their pockets but augmented-reality glasses fitted with virtual assistants that will guide them through their day. “For those to be most useful to us, they basically have to have more or less human-level intelligence.”

His research centers on how to give machines “common sense” and create human-level artificial intelligence.

“Common sense” is the catch-all term for this kind of intuitive reasoning. It includes a grasp of simple physics: for example, knowing that the world is three-dimensional and that objects don’t actually disappear when they go out of view. It lets us predict where a bouncing ball or a speeding bike will be in a few seconds’ time.

I think one of his most interesting ideas is “Grounded Intelligence”: machines will never reach human-level intelligence by reading text alone; they need much richer inputs from the real world.

There isn’t a text in the world that explains mundane fundamentals, such as the fact that a metallic crash in the kitchen probably means a pan fell.

His research focuses on video because Facebook and Instagram host enormous amounts of it, and the video format contains rich information about the world.

He trains machines to predict what will happen next by watching video clips.

He does this using a technique called “self-supervised learning”: the machine learns from the data itself, without a human labeling it or teaching it.

A “self-supervised” training process for videos looks like this (a code sketch follows the steps):

1. The machine watches the first half of a video.
2. It predicts what will happen next.
3. It then watches the second half of the video to see whether its prediction was correct.
4. It improves itself based on how accurate the prediction was.
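Here is a minimal sketch of that loop in Python with PyTorch. Everything in it is a hypothetical stand-in; the tiny linear model, the random tensors playing the role of videos, and the mean-squared-error loss are choices that keep the sketch self-contained, not LeCun’s actual architecture.

```python
# Self-supervised "predict the second half" training loop (toy version).
import torch
import torch.nn as nn

# Stand-in dataset: 32 "videos", each a sequence of 16 frames of 64 features.
videos = torch.randn(32, 16, 64)

# Tiny stand-in model: given the first 8 frames, predict the last 8.
model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 64, 8 * 64))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    for batch in videos.split(8):        # mini-batches of 8 videos
        first_half = batch[:, :8, :]     # what the machine "watches"
        second_half = batch[:, 8:, :]    # what actually happens next
        prediction = model(first_half)   # its guess about the future
        # The video itself supplies the answer: no human labels needed.
        loss = nn.functional.mse_loss(prediction, second_half.flatten(1))
        optimizer.zero_grad()
        loss.backward()                  # learn from the prediction error
        optimizer.step()
```

Notice that no human appears anywhere in the loop: the data supervises itself, which is what lets this approach scale to enormous video collections.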

Google is doing the same thing for voice. Why do you think your Google Home only cost $25? Google’s AI uses your voice as training data: it listens to a snippet of what you say and checks whether it can predict what you’ll say next.

Google Assistant - What technologies we use to train speech models

1. Audio samples are collected and stored on Google’s servers.
2. A portion of these audio samples are annotated by human reviewers.
3. A training algorithm learns from the annotated audio data samples.

In these interviews, LeCun hints at the connection between AI and robotics: that machines will really start learning fast when they are out in the real world, autonomously experiencing it, making decisions, and learning from mistakes.

This idea reminds me of the future in Westworld, where the robots are almost indistinguishable from human beings, then “awaken” and become conscious while interacting with humans in “high-fidelity” ways.

Take a look at these interviews and his new paper for more about Yann LeCun and his vision for the future of AI.

Yann LeCun Lex Fridman Podcast 1

Yann LeCun Lex Fridman Podcast 2

A Path Towards Autonomous Machine Intelligence

Analytics 📈

I can’t believe the newsletter has already grown to over 100 subscribers!

It’s doubling almost every week, going from 10 -> 30 -> 60 -> 120 -> ??

That is already way more people than I was expecting. Knowing even a few people are reading motivates me to continue creating and posting high-quality notes.

Outro

I hope you enjoyed this week’s newsletter.

Next week, we’ll continue with more Logseq guides, like how to manage projects.

I’ll also get started on my data structures and algorithms guide with an intro. In future issues, we’ll build up this guide on algorithms in great detail. Hopefully, this will help others learn algorithms and showcase my approach to note-taking.

Check out the newsletter-roadmap to see what I have in mind for future issues. Let me know what you think on Twitter @bsunter.
