Tommi Jaakkola, professor of Electrical Engineering and Computer Science at MIT, says, “If you had a very small neural network, you might be able to understand it. But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.” We are now at the stage of these very large systems. So, in order to make these machines explain themselves — an issue that will have to be solved before we can place any trust in them — what methods are we using?
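To get a feel for the scale Jaakkola describes, here is a minimal sketch (not from the article; the function name and layer sizes are illustrative assumptions) that counts the parameters of a fully connected network. A toy network can be inspected weight by weight; a network with thousands of units per layer and hundreds of layers cannot.

```python
# Illustrative sketch: parameter counts for a dense feed-forward network.
# Each layer contributes (inputs * outputs) weights plus one bias per output.
def mlp_param_count(layer_sizes):
    """Total weights + biases for a fully connected network."""
    return sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))

# A "very small" network: 23 parameters, easy to examine by hand.
tiny = mlp_param_count([4, 3, 2])

# Thousands of units per layer, ~100 layers, as Jaakkola describes:
# roughly 396 million parameters — far beyond human inspection.
large = mlp_param_count([2000] * 100)

print(tiny, large)  # → 23 396198000
```

The point of the sketch is only that parameter count grows with the product of layer widths, so interpretability techniques that work for a handful of weights do not scale to modern models.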

Source: We Still Know Very Little About How AI Thinks
