Do the best you can until you know better. Then when you know better, do better. – Maya Angelou

Mr. Walker's Classroom Blog

  • Big deal, you learned to keyboard. But what really happens when you start to work?

    Chrome Extension Typing Speed Monitor shows you just that.

    So if you just read the article on Smashing Apps, 8 Online Apps That Help You Improve Your Typing Speed, find out how well you ACTUALLY did.

    Gives you statistics on how fast you type, what you type, and how often you type.

    This extension records how fast you type while you are actively typing. Active typing is the time you spend typing and does not include any breaks longer than 5 seconds that occur between keystrokes.
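    The "active typing" rule above is simple enough to sketch: add up the gaps between consecutive keystrokes, but throw out any pause longer than 5 seconds. This is a hypothetical illustration of that idea, not the extension's actual code; the function and variable names are my own.

```python
def active_typing_seconds(timestamps, max_gap=5.0):
    """Total active typing time for a sorted list of keystroke times (in seconds).

    Gaps longer than max_gap count as breaks and are excluded.
    """
    active = 0.0
    for prev, curr in zip(timestamps, timestamps[1:]):
        gap = curr - prev
        if gap <= max_gap:
            active += gap
    return active

# Keystrokes at 0s, 1s, 2s, then a 10-second break, then one more at 13s.
# Only the short gaps count: 1 + 1 + 1 = 3 seconds of active typing.
print(active_typing_seconds([0, 1, 2, 12, 13]))  # 3.0
```

    Divide your word count by this active time (instead of wall-clock time) and you get the flattering words-per-minute number such extensions report.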

  • Shouldn’t life be more like a game? Head over to HighScoreHouse and set up your house for points!

    Meet the team below.  Makes you wonder about the next game you build, doesn’t it?

    HighScoreHouse

  • If you love building with LEGO® bricks, there’s now a whole new way to build – online in Google Chrome. Welcome to Build. Think of it as the world’s biggest LEGO set. So what will you build? www.buildwithchrome.com

    From Google:

    We love building with LEGO bricks. We loved it as kids, and we still love it now. Visit any Google office and you’re going to see LEGO bricks all over the place.

    So it’s with childish delight that today we can announce Build. Over the last few months we’ve been working with LEGO Australia, thinking about what would happen if we brought bricks to the browser. Build is the result: our latest Chrome Experiment which lets you explore and build a new world of LEGO creations together online. With 8 trillion bricks, think of Build as the largest LEGO set you’ve ever seen.

    Build may look simple, but this collaborative 3D building experience would not have been possible a couple of years ago. It shows how far browser technology has come and how the web is an amazing platform for creativity. We made the bricks with WebGL, which enables powerful 3D graphics right in the browser and demonstrates the upper limit of current WebGL graphics performance.  We then mixed in Google Maps (another Aussie invention) so you can put your creation in a LEGO world alongside everyone else’s.

    Right now Build is an experiment we’ve been working on in Sydney. We’re launching first in Australia and New Zealand and hope to open up in other countries soon. This year is the 50th anniversary of the LEGO brick in Australia and Build joins the celebration of the LEGO Festival of Play online.
    Over the next few weeks and months we hope to see you fill the Build world up with everything from medieval castles to sea snakes, giant mouse cursors to smiling monsters and even a Kiwi! Share your creations with us on +Google and we’ll re-post the most inventive.

    You can start building at buildwithchrome.com.

    Posted on Google by Lockey McGrath, Product Marketing Manager, Google Australia and New Zealand

  • I know, I know, more of MrWalker and his never-ending quest to demonstrate that Cats, and not people, are the most valuable “product” on the Internet. Catbook is going to take over Facebook, you wait and see. Still, this is an interesting article on getting the network to think. Read the entire article, the related articles and videos, and comment on NYTimes.

    The article is included here simply so that it doesn’t become hard to find when we get back to school and I need it for a Quiz on the Future of Things. READ THE ORIGINAL

     


     

    By JOHN MARKOFF
    Published: June 25, 2012

    MOUNTAIN VIEW, Calif. — Inside Google’s secretive X laboratory, known for inventing self-driving cars and augmented reality glasses, a small group of researchers began working several years ago on a simulation of the human brain.


    Jim Wilson/The New York Times

    Andrew Y. Ng, a Stanford computer scientist, is cautiously optimistic about neural networks.

    There Google scientists created one of the largest neural networks for machine learning by connecting 16,000 computer processors, which they turned loose on the Internet to learn on its own.

    Presented with 10 million digital images found in YouTube videos, what did Google’s brain do? What millions of humans do with YouTube: it looked for cats.

    The neural network taught itself to recognize cats, which is actually no frivolous activity. This week the researchers will present the results of their work at a conference in Edinburgh, Scotland. The Google scientists and programmers will note that while it is hardly news that the Internet is full of cat videos, the simulation nevertheless surprised them. It performed far better than any previous effort by roughly doubling its accuracy in recognizing objects in a challenging list of 20,000 distinct items.

    The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers. It is leading to significant advances in areas as diverse as machine vision and perception, speech recognition and language translation.

    Although some of the computer science ideas that the researchers are using are not new, the sheer scale of the software simulations is leading to learning systems that were not previously possible. And Google researchers are not alone in exploiting the techniques, which are referred to as “deep learning” models. Last year Microsoft scientists presented research showing that the techniques could be applied equally well to build computer systems to understand human speech.

    “This is the hottest thing in the speech recognition field these days,” said Yann LeCun, a computer scientist who specializes in machine learning at the Courant Institute of Mathematical Sciences at New York University.

    And then, of course, there are the cats.

    To find them, the Google research team, led by the Stanford University computer scientist Andrew Y. Ng and the Google fellow Jeff Dean, used an array of 16,000 processors to create a neural network with more than one billion connections. They then fed it random thumbnails of images, one each extracted from 10 million YouTube videos.

    The videos were selected randomly and that in itself is an interesting comment on what interests humans in the Internet age. However, the research is also striking. That is because the software-based neural network created by the researchers appeared to closely mirror theories developed by biologists that suggest individual neurons are trained inside the brain to detect significant objects.

    Currently much commercial machine vision technology is done by having humans “supervise” the learning process by labeling specific features. In the Google research, the machine was given no help in identifying features.

    “The idea is that instead of having teams of researchers trying to find out how to find edges, you instead throw a ton of data at the algorithm and you let the data speak and have the software automatically learn from the data,” Dr. Ng said.
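    To make Dr. Ng’s point concrete: in unsupervised learning, the algorithm is never told what the categories are; it has to find structure in raw data on its own. As a toy illustration (this is plain k-means clustering, not Google’s neural network; the names and data are invented), here unlabeled 2-D points get grouped into two clusters with no labels ever provided.

```python
def kmeans(points, k, iters=20):
    """Toy k-means: group unlabeled 2-D points into k clusters."""
    centers = points[:k]  # naive initialization from the first k points
    for _ in range(iters):
        # assign each point to its nearest center
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            groups[i].append(p)
        # move each center to the mean of its assigned points
        centers = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# Two obvious clumps of unlabeled points; the algorithm discovers both
# centers by itself, near (0.33, 0.33) and (9.33, 9.33).
data = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
print(sorted(kmeans(data, 2)))
```

    The Google system worked at a vastly larger scale and learned far richer features (edges, faces, cats), but the principle is the same: “let the data speak.”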

    “We never told it during the training, ‘This is a cat,’ ” said Dr. Dean, who originally helped Google design the software that lets it easily break programs into many tasks that can be computed simultaneously. “It basically invented the concept of a cat. We probably have other ones that are side views of cats.”

    The Google brain assembled a dreamlike digital image of a cat by employing a hierarchy of memory locations to successively cull out general features after being exposed to millions of images. The scientists said, however, that it appeared they had developed a cybernetic cousin to what takes place in the brain’s visual cortex.

    Neuroscientists have discussed the possibility of what they call the “grandmother neuron,” specialized cells in the brain that fire when they are exposed repeatedly or “trained” to recognize a particular face of an individual.

    “You learn to identify a friend through repetition,” said Gary Bradski, a neuroscientist at Industrial Perception, in Palo Alto, Calif.

    While the scientists were struck by the parallel emergence of the cat images, as well as human faces and body parts in specific memory regions of their computer model, Dr. Ng said he was cautious about drawing parallels between his software system and biological life.

    “A loose and frankly awful analogy is that our numerical parameters correspond to synapses,” said Dr. Ng. He noted that one difference was that despite the immense computing capacity that the scientists used, it was still dwarfed by the number of connections found in the brain.

    “It is worth noting that our network is still tiny compared to the human visual cortex, which is a million times larger in terms of the number of neurons and synapses,” the researchers wrote.

    Though the network is dwarfed by the immense scale of biological brains, the Google research provides new evidence that existing machine learning algorithms improve greatly as the machines are given access to large pools of data.

    “The Stanford/Google paper pushes the envelope on the size and scale of neural networks by an order of magnitude over previous efforts,” said David A. Bader, executive director of high-performance computing at the Georgia Tech College of Computing. He said that rapid increases in computer technology would close the gap within a relatively short period of time: “The scale of modeling the full human visual cortex may be within reach before the end of the decade.”

    Google scientists said that the research project had now moved out of the Google X laboratory and was being pursued in the division that houses the company’s search business and related services. Potential applications include improvements to image search, speech recognition and machine language translation.

    Despite their success, the Google researchers remained cautious about whether they had hit upon the holy grail of machines that can teach themselves.

    “It’d be fantastic if it turns out that all we need to do is take current algorithms and run them bigger, but my gut feeling is that we still don’t quite have the right algorithm yet,” said Dr. Ng.

  • From Forbes

    To test your mental acuity, answer the following questions (no peeking at the answers!):

    1. Johnny’s mother had three children. The first child was named April. The second child was named May. What was the third child’s name?

    2. A clerk at a butcher shop stands five feet ten inches tall and wears size 13 sneakers. What does he weigh?

    3. Before Mt. Everest was discovered, what was the highest mountain in the world?

    4. How much dirt is there in a hole that measures two feet by three feet by four feet?

    5. What word in the English language is always spelled incorrectly?

    6. Billie was born on December 28th, yet her birthday always falls in the summer. How is this possible?

    7. In British Columbia you cannot take a picture of a man with a wooden leg. Why not?

    8. If you were running a race and you passed the person in 2nd place, what place would you be in now?

    9. Which is correct to say, “The yolk of the egg is white” or “The yolk of the egg are white?”

    10. A farmer has five haystacks in one field and four haystacks in another. How many haystacks would he have if he combined them all in one field?
