How Much Do You Know About Mechanistic Interpretability (AI)?


The way neural networks work is a fascinating subject. The more I learn, the more I realize how little most people actually know about "AI". I have seen entrepreneurs and even highly educated scientists make remarkably uninformed statements about it.

The reason I specifically brought up Mechanistic Interpretability is that it lets us peek into the inner workings of an "AI". I used to think of neural networks as black boxes with extremely complicated inner workings, but it turns out this complexity can be managed to some degree. Currently these techniques are mostly applied one layer at a time. Eventually we could see multi-layered analyses that unlock previously unexpected capabilities from the "AI" we build. To make the idea concrete, the sketch below shows one simple way to look at what a single layer is doing.
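Here is a minimal sketch of my own (not a technique described in this post) of the most basic starting point: capturing the activations of one layer of a small PyTorch network with a forward hook and checking which hidden units respond most strongly. The toy model, layer choice, and variable names are all illustrative assumptions.

```python
# Minimal sketch: inspect a single layer's activations via a forward hook.
# The two-layer toy model below stands in for a real network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 4),
)

captured = {}

def save_activations(module, inputs, output):
    # Store the layer's output so it can be examined after the forward pass.
    captured["hidden"] = output.detach()

# Hook the hidden ReLU layer -- the "single layer" being analyzed.
hook = model[1].register_forward_hook(save_activations)

x = torch.randn(8, 16)   # a batch of 8 example inputs
model(x)                 # forward pass; the hook fires here
hook.remove()

acts = captured["hidden"]            # shape: (8, 32)
mean_act = acts.mean(dim=0)          # average activation per hidden unit
top_units = torch.topk(mean_act, k=5).indices
print("most active hidden units:", top_units.tolist())
```

Real mechanistic interpretability work goes much further than this (for example, training sparse autoencoders on these activations to find human-readable features), but the raw material is the same: the activations of one layer at a time.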

  • I Am An Expert
  • I Have Intermediate Knowledge
  • I Have A Basic Understanding
  • I Don't Know Anything About Mechanistic Interpretability