Making an Impact

It seems like, in the end, everyone wants to have control over their environment. People see problems all around themselves, and wish that they could fix them, wish that they could have an impact on the world around them. They wish for agency. They wish not to be a thermometer, feeling the changes in temperature but powerless to do anything, but to be a thermostat, armed with the skills and materials to change the world and bring it to a reasonable temperature. This is why people talk so much about empowerment, and about allowing people to effect change.

“I really hate hot weather”

With computers and AI, however, there is another obstacle even more difficult to overcome than a lack of agency. It is easy enough to make a robot that can manipulate its environment; Roombas change their environment all the time. The difficulty is in making robots want to change their environment. A thermostat has all the power in the world to change the temperature in its puny domain, but it doesn’t care what it does; it simply follows a human’s orders. The problem is that robots and machines lack autonomy, the ability to make decisions for themselves. With progress in AI, such as the use of utility theory, we are slowly giving machines ways of making decisions on their own, but this will be an extremely hard, possibly impossible, puzzle to solve.

“Hahahahaha! I have much power! I can clean all the floors!”

The Ice Cold Killer


I wrote a program to analyze a collection of stories by the Brothers Grimm and to build a Markov chain from that text. I call the resulting nonsensical story The Ice Cold Killer.

d children, they were going, and got
ready a poisoned apple: the outer chamber
alone, wish for a pretty yellow buttons were, the kingdom for his master said: ‘Listen, my friend, though
he may be of shining through the gardener for
all the people said ‘No,’ unless you meet with the ring on in daily hope of finding
it. After a time the ale cask. The bean, ‘that is the matter?

One evening she would, she could no longer, and there, the people; ‘I acknowledge.

Then the garden stood still; the butter and cried she, ‘I’ll carry you into it, and lay it down.’ ‘Pray
don’t,’ answered. Then thoughts;


Final Project: Feedforward Neural Networks

A neural network is a series of nodes, or artificial neurons, connected to each other in such a way that they can take an input signal, and convert it to output by passing it through the network. Neural networks are useful in pattern recognition, as the creator does not need to write algorithms to recognize every pattern; the network learns how to recognize the patterns by going through many examples. In this way a neural network can find patterns and make predictions that the programmer knew nothing about.
The purpose of this project is to better explain how exactly neural networks function, and to create a simple example of a neural network that performs the function of an XOR gate.
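The XOR network could look something like the following sketch: a tiny feedforward network with sigmoid units, trained by backpropagation. This is not the project's actual code; the hidden-layer size, learning rate, and epoch count are arbitrary choices.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """A 2-input, one-hidden-layer, 1-output network with sigmoid units."""

    def __init__(self, hidden=4, seed=1):
        rnd = random.Random(seed)
        # Each hidden row holds [weight for x0, weight for x1, bias].
        self.w_hidden = [[rnd.uniform(-1, 1) for _ in range(3)]
                         for _ in range(hidden)]
        # Output weights: one per hidden unit, plus a trailing bias.
        self.w_out = [rnd.uniform(-1, 1) for _ in range(hidden + 1)]

    def forward(self, x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in self.w_hidden]
        o = sigmoid(sum(wj * hj for wj, hj in zip(self.w_out, h)) + self.w_out[-1])
        return h, o

    def predict(self, x):
        return self.forward(x)[1]

    def train(self, data, epochs=10000, lr=0.5):
        for _ in range(epochs):
            for x, target in data:
                h, o = self.forward(x)
                # Backpropagation: output delta first, then hidden deltas.
                delta_o = (o - target) * o * (1 - o)
                delta_h = [delta_o * self.w_out[j] * h[j] * (1 - h[j])
                           for j in range(len(h))]
                for j in range(len(h)):
                    self.w_out[j] -= lr * delta_o * h[j]
                self.w_out[-1] -= lr * delta_o
                for j, w in enumerate(self.w_hidden):
                    w[0] -= lr * delta_h[j] * x[0]
                    w[1] -= lr * delta_h[j] * x[1]
                    w[2] -= lr * delta_h[j]

xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
net = TinyNet()
net.train(xor_data)
for x, t in xor_data:
    print(x, "->", round(net.predict(x), 3))
```

A hidden layer is essential here: XOR is not linearly separable, so a single neuron can never learn it, which is exactly why it makes a good minimal test case.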

Neural networks were first imagined in 1943 by Warren McCulloch and Walter Pitts. Since then, they have progressed greatly, with the most noteworthy advancement being the creation of the backpropagation algorithm in 1974. While critics say that neural networks are inefficient, as they take up a great amount of resources to store all of the neurons and connections, and take a lot of processing power to send signals across the connections, they have proved very useful in making predictions, as they can find patterns by themselves given training data.


Various researchers have made great progress using neural networks in the field of pattern recognition. Neural networks were the first artificial systems to achieve a level of pattern recognition as good as human recognition, and now there are even some systems with better-than-human pattern recognition. Alex Graves et al. created a neural network that was able to learn to recognize handwriting in three different languages. At the International Joint Conference on Neural Networks in 2012, neural networks were able to compete with humans in recognizing traffic signs. Neural networks are also the foundation of the growing field of deep learning.

Previous Work

A lot of work has been done recently with neural networks. Notably, Alex Graves and Jurgen Schmidhuber created a Multi-Dimensional Recurrent Neural Network (MDRNN), which can recognize handwriting one character at a time and can be trained on any language. Their network also uses internal memory cells to store information as it processes its input. The network successfully learned to recognize Arabic when tested on handwritten Tunisian post codes.

Also, Gert Westermann of the University of Edinburgh led a team that made a network to conjugate English past-tense verbs. According to the report:

After 912 epochs, the network produced 100% of the
irregular and 99.8% (all but two) of the regular past
tense forms correctly.

These networks are far more complicated and sophisticated than the one proposed in this project; they are cited simply as demonstrations of the power of neural networks.

Current Problems in the Area

Currently, a big problem and source of criticism in the field of neural networks is the amount of resources that a neural network uses up. The brain has evolved to perform computations in this manner, but computers were not designed to do so, and thus neural networks lead to a sub-optimal allocation of resources.

Also, neural networks are looked down upon as being black boxes. A neural network can be trained to behave in any way, but it is hard to alter its behavior without creating thousands of further testing scenarios, and the weights that the network ends up with at the end do not help researchers better understand the workings of the network. They may as well be random numbers.

Proposed Solutions

Given the rapid, ongoing increase in computing power, the first objection is not very strong. Even if neural networks do not use resources optimally, their usefulness makes up for this failure, and as computing power increases, so does the power of neural networks. It would be an interesting experiment to try to create a computer architecture better suited to neural networks, with physical connections between nodes, but it is not a necessary project for the field to advance. Also, work on biological neural networks could eventually lead to an optimized solution.

The second problem is not so easy to brush aside. It is a significant flaw that neural networks cannot be easily adjusted. One way to find out which parts of a network would have to be adjusted would be to run one trial with a new scenario in which the desired behavior was required, keep track of which nodes had their weights changed the most, and then change those weights by hand. Of course, this might also affect the outputs for other scenarios, which is probably why it is not done.


Neural networks are an interesting field, very different from other approaches to AI. That difference has its benefits, like their usefulness in pattern recognition, but it also has drawbacks. Neural networks are a roundabout way of solving AI problems like modelling and prediction, and while they may not be the most efficient way to solve those problems, they are fascinating in themselves.

Future Work

In the future, I propose constructing a more sophisticated neural network that moves beyond being a simple proof of concept and actually performs a useful function, such as predicting the weather.


A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, J. Schmidhuber. A Novel Connectionist System for Improved Unconstrained Handwriting Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, 2009.

D. C. Ciresan, U. Meier, J. Masci, J. Schmidhuber. Multi-Column Deep Neural Network for Traffic Sign Classification. Neural Networks, 2012.

A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. 2009.

W. McCulloch, W. Pitts. A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, 1943.

P. J. Werbos. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. 1974.

Conference Which I Would Like To Attend

Given my research into neural networks, I would like to attend the International Joint Conference on Neural Networks (IJCNN). From June 10 to 15, 2012, it was held in Brisbane, Australia. One interesting topic discussed at the conference was the use of neural networks in teaching computers to play Go. Go is an interesting problem for computers because it is much more complicated than chess, and while computers can do well on small boards, they have a hard time thinking strategically across the entire board.

kNN Classification


This is the result of my k Nearest Neighbors classification function on the data set that I was given. It took points and, using their proximity to the blue and red points, classified them as either magenta or pink (magenta if they were nearer more blue points and pink if they were nearer more red points). The function classified 86/100 points correctly with a k value of 3.
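The core of such a kNN classifier is short enough to sketch in full (the toy blue/red points below are made up, not the assignment's data set):

```python
import math
from collections import Counter

def knn_classify(point, training, k=3):
    """Label `point` by majority vote among its k nearest training points.

    `training` is a list of ((x, y), label) pairs.
    """
    nearest = sorted(training, key=lambda item: math.dist(point, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Made-up points standing in for the assignment's blue/red data set.
training = [((0, 0), "blue"), ((1, 0), "blue"), ((0, 1), "blue"),
            ((5, 5), "red"), ((6, 5), "red"), ((5, 6), "red")]
print(knn_classify((0.5, 0.5), training))  # prints "blue"
print(knn_classify((5.5, 5.5), training))  # prints "red"
```

Using an odd k such as 3 avoids ties in two-class voting, which is one reason it is a common default.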

This problem could be done with Naive Bayes by putting the features into groups (e.g. -5 to -4, -4 to -3, and so on). Then, the probability of features within a range given label 0 or 1 could be computed, so new points could be classified. It wouldn’t perform that well, because the features would be less exact. Naive Bayes is better with categorical data, whereas kNN works with quantitative data.
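That binning approach could be sketched like this (a rough sketch with a made-up bin width and toy data, using Laplace smoothing so an unseen bin doesn't zero out the product):

```python
import math
from collections import defaultdict

def bin_value(x, width=1.0):
    """Drop a continuous value into an integer bin, e.g. -4.3 -> bin -5."""
    return math.floor(x / width)

def train_naive_bayes(data):
    """data is a list of (features, label). Returns bin counts and label totals."""
    counts = defaultdict(lambda: defaultdict(int))
    label_counts = defaultdict(int)
    for features, label in data:
        label_counts[label] += 1
        for i, value in enumerate(features):
            counts[label][(i, bin_value(value))] += 1
    return counts, label_counts

def classify(point, counts, label_counts):
    """Pick the label maximizing prior times per-feature bin likelihoods."""
    total = sum(label_counts.values())
    best_label, best_score = None, -1.0
    for label, n in label_counts.items():
        score = n / total  # the prior P(label)
        for i, value in enumerate(point):
            # Laplace smoothing so an unseen bin doesn't zero the product.
            score *= (counts[label][(i, bin_value(value))] + 1) / (n + 2)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Made-up data: label 0 clusters near the origin, label 1 near (5, 5).
data = [((0.2, 0.8), 0), ((0.5, 0.3), 0), ((4.7, 5.1), 1), ((5.2, 4.8), 1)]
counts, label_counts = train_naive_bayes(data)
print(classify((0.4, 0.6), counts, label_counts))  # prints 0
```

The bin width is exactly the precision trade-off mentioned above: wide bins throw away detail, while narrow bins leave most bins empty unless there is a lot of training data.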

My program did worse on the higher-dimensional data, classifying only about 55-60 of the points correctly. It seems that in higher dimensions the distances between points become much more uniform, so a point's nearest neighbors are less informative and the chances of classifying it correctly decrease.

kNN won’t work very well if the features of the two types are similar, or on data sets with many features.

I prefer kNN. It seems reasonable that things of the same type share features.

While the government exists as a compromise between safety and liberty, that does not mean that citizens deserve no liberty as the price of their safety. Part of that liberty is privacy, and it is absolutely wrong of the government to take that privacy from citizens. People should not be automatically treated as suspects, watched in case they commit a crime.

I don’t mind personal cameras like Google Glass or cell phone cameras; people can take all of the videos they want. If those videos can be used to establish the guilt or innocence of someone on trial, all the better. But no organization should have that much information about everyone.

New York Times Article Concerning Surveillance

My Voice

I’m not really interested in my voice. I don’t want to be the spokesman of the world; I’d prefer to be the person doing science and building cool stuff. Or teaching. That would be nice too. So I can’t really decide between being the scientist who discovers things, the engineer who uses the things the scientist discovered to improve people’s material lives, the cutting-edge artist who uses things the scientist learned to make cool art and enrich people’s lives, or the teacher who discovers new things and teaches them to students. All those jobs would be pretty fun. I think science or theoretical computer science might be the most fun. Right now I am none of those, but I am closest to being an engineer because I write code based on things other people have discovered. That’s pretty fun too.

I live in Westfield, New Jersey.

Yay Lisp

Here are some simple Lisp functions:
(defun multiply (X Y)
  (* X Y))

(defun divide (X Y)
  (/ X Y))

(defun add (X Y)
  (+ X Y))

(defun subtract (X Y)
  (- X Y))

(defun factorial (X)
  (if (< X 2)
      1
      (* X (factorial (- X 1)))))

Lisp is a functional programming language, unlike Java, which is imperative. Imperative programming languages give the computer a series of steps to follow, while functional languages describe what the end goal of a computation is. Declarative programming goes even further, expressing everything in logic and asking the computer to find cases which satisfy all of the conditions. Lisp also works a lot with lists.

Lisp was created in 1958 by John McCarthy at MIT. It is good at recursion, and at prototyping problems that you don't yet know how to solve, so it has long been very useful for AI programmers.

Final Project Proposal: Neural Networks

  1. What is the specific area you are working within? I will be working with neural networks.
  2. What is it that you would like to learn about it? I want to understand how the math behind the networks works.
  3. Why is this interesting to you? The idea of simulating the brain is really cool, and the math looks interesting.
  4. Why should other people care? Neural networks are good at pattern recognition, so they could find trends to make predictions about the economy, and could also help facial recognition systems.
  5. What are the practical applications of this topic? Neural networks can be used in industry to predict sales, and can be used to read text and recognize faces.
  6. What is the closest example (researched from the web) of what you want to build or investigate? This is an example of a neural network that was taught to find the square roots of numbers. It just has 12 neurons, so it seems reasonable.
  7. What do I want to accomplish? I want to learn about the math and ideas behind neural networks by working on a simple network of a few nodes.
  8. My ordered set of steps I plan to take to accomplish it? I will read a lot about neural networks, and make classes for neurons, layers of neurons, and the network as a whole, in that order.
  9. Foreseeable obstacles? I don’t actually know all the math, and it will probably take a lot of code to make even a simple neural network. It will also get really complex.
  10. What tools am I using? I will probably program in Eclipse, and I will read about neural networks online.