Do the best you can until you know better. Then when you know better, do better. – Maya Angelou

Mr. Walker's Classroom Blog

  • Slide Show

    There’s a new quinoa restaurant in San Francisco — yes, quinoa restaurants are a thing in San Francisco, so that’s not what’s noteworthy. At this restaurant, customers order, pay and receive their food and never interact with a person.

    The restaurant, Eatsa, the first outlet in a company with national ambitions, is almost fully automated. There are no waiters or even an order taker behind a counter. There is no counter. There are unseen people helping to prepare the food, but there are plans to fully automate that process, too, if it can be done less expensively than employing people.

    Last week, I was in a fast-moving line and browsed on a flat-screen monitor the menu of eight quinoa bowls, each costing $6.95 (burrito bowl, bento bowl, balsamic beet). Then I approached an iPad, where I tapped in my order, customized it and paid. My name, taken from my credit card, appeared on another screen, and when my food was ready, a number showed up next to it.

    It corresponded to a cubby where my food would soon appear. The cubbies are behind transparent LCD screens that go black when the food is deposited, so no signs of human involvement are visible. With two taps of my finger, my cubby opened and my food was waiting.

    The quinoa — stir-fried, with arugula, parsnips and red curry — tasted quite good.

    Whether a restaurant that employs few people is good for the economy is another question. Restaurants, especially fast-food restaurants, have traditionally been a place where low-skilled workers can find employment. Most of the workers are not paid much, though in San Francisco employers of a certain size must pay health benefits and in 2018 a minimum wage of $15.

    “What percent of our currently human interactions are going to remain human as technology really advances?” said Andrew McAfee, co-founder of the M.I.T. Initiative on the Digital Economy and co-author of “The Second Machine Age.” “I think for a lot of the meals I’m going to want to eat out in five years, if I don’t deal with a person, that’s not going to be a net negative for me at all.”

    Eatsa is one more example of how rapidly machines have moved beyond routine jobs like clerical and manufacturing work to knowledge jobs and service jobs — like waiting tables. Economists disagree on whether technology will create more jobs than the ones it destroys, as has happened historically.

    Mr. Friedberg, a lifelong vegetarian and passionate apostle of quinoa, said opening a restaurant without people was not the point. Rather, it was to open a fast-food restaurant that aimed to be faster, tastier and less expensive. He and his team determined that automation would achieve that.

    Quinoa “is a much more efficient way to deliver protein to people than animal protein,” he said. He believes that changing consumers’ tastes is a way to change modern corporate agriculture, much of which is focused on feeding animals.

    “The objective is over time we want to automate more and more to increase speed and reduce cost, so we create a food product that’s much cheaper and also happens to be healthy,” he said.

    By not hiring people to work in the front of the restaurant, he said, they save money on payroll and real estate. (There will always be at least one person available to help people navigate the iPads and to clean up.) The kitchen is also automated, though he declined to reveal how, and the company is experimenting with how to further automate food preparation and delivery.

    Cutting costs in restaurants is nothing new, of course. Eatsa brings to mind automats, the waiter-less restaurants that are a cross between a cafeteria and a vending machine. They are still found in Japan and some parts of Europe. (The last Horn & Hardart automat, in New York, closed in 1991.) Mr. Friedberg says Eatsa goes well beyond that, by using software and supply chain innovation to fundamentally change how a restaurant runs.

    He is firmly on the side of the optimists who think automation benefits the whole of society even if it hurts a few. “There’s rarely been a technology shift where people didn’t complain about technology replacing people’s jobs,” he said. “The reality is the economic growth from new technology has always resulted in new economic activity and job descriptions.

    “We can sit and debate all day what the implications are for low-wage workers at restaurants, but I don’t think that’s fair. If increased productivity means cost savings get passed to consumers, consumers are going to have a lot more to spend on lots of things.”

    Eatsa could also create new jobs, he said, like building automated machines and software systems ….

  • As we talk about Google Translate in class today, it is worth noting that the article quoted below was published just this morning.

    From the Daily Infographic

    Data on spoken languages across the entire globe is not that easy to come by and, as the author admits, some of the census data used is over eight years old. Projects like this are still interesting, even if they can’t be flawlessly accurate. Going forward, the growth of mobile devices and the internet will make gathering this sort of data easier, and having data from as far back as possible to compare it to will be valuable.

    It’s important to note that the big circular graph is referring to “mother tongues.” If a person knows more than one language, this graph only represents their first language. A person born in Mexico who later became fluent in English would only count as a Spanish speaker. The graphs at the bottom, however, reveal the bi-tri-whatever lingual reality. We all know that English is a big deal, but dang, at 1,500 million learners we are crushing the competition. Take that, French.

    Perhaps you are inspired to learn a new language yourself? Check out this infographic guide to the easiest and hardest languages for speakers of European languages to learn.

  • Read the article on NYTimes.com titled “Minecraft Stars on YouTube Share Secrets to Their Celebrity”

    Videos by Mitchell Hughes, a top Minecraft YouTuber, often consist of him and his friends exchanging jokes as they play survival games with other online players. Credit: Eve Edelheit for The New York Times

    Excerpts of the article

    YouTube videos about Minecraft are giant hits, even though the game’s blocky graphics don’t seem to scream excitement. Millions of people watch players narrate while they fly, hike and excavate Minecraft’s virtual world, which is akin to an open world digital Lego set. The Minecraft narrators – often men in their early 20s with effervescent personalities – act as solo tour guides as they build skyscrapers, ships and other structures or engage in battles of survival.

    ….

    YouTube, which is owned by Google, says Minecraft is the most popular game of all time on the site, ahead of Grand Theft Auto and Call of Duty, two major video game franchises. Last year, “Minecraft” was the second most searched term on YouTube, after “Frozen.” The popularity of the game explains why Microsoft paid $2.5 billion last year to acquire Mojang, the Swedish company that created Minecraft in 2009.

    “The amazing thing about using this software is you can produce an amazing video every day with big production values,” said Joseph Garrett, a master of the Minecraft YouTube genre who uses the handle Stampy. “If you were doing live action shows that could be done, but it wouldn’t be as easy.”

    Based on publicly available audience numbers and typical advertising rates, Peter Warman, an analyst with the market research firm Newzoo, estimates there are eight to 10 Minecraft YouTubers who earn over $1 million a year.

    To get a better grasp on what it takes to be a successful Minecraft YouTuber — and, by extension, better understand what makes the videos so popular….

    Read the full article

  • From InfoWorld

    Following our discussion in class, I would like to add this article, which covers which language we choose and why. That choice is often more difficult than it seems at first glance, and more information does not settle it, since no answer is simply “right” or “wrong.” Head over to InfoWorld, look at the related articles, and give it some thought.

    In the history of computing, 1995 was a crazy time. First Java appeared, then close on its heels came JavaScript. The names made them seem like conjoined twins newly detached, but they couldn’t be more different. One of them is compiled and statically typed; the other is interpreted and dynamically typed. That’s only the beginning of the technical differences between these two wildly distinct languages, which have since shifted onto a collision course of sorts, thanks to Node.js.

    If you’re old enough to have been around back then, you might remember Java’s early, epic peak. When it left the labs, its hype meter was pinned. Everyone saw it as a revolution that would stop at nothing less than a total takeover of computing. That prediction ended up being only partially correct. Today, Java dominates Android phones, enterprise computing, and some embedded worlds like Blu-ray disks.

    For all its success, though, Java never established much traction on the desktop or in the browser. People touted the power of applets and Java-based tools, but gunk always glitched up these combinations. Servers became Java’s sweet spot.

    Meanwhile, what programmers initially mistook for the dumb twin has come into its own. Sure, JavaScript tagged along for a few years as HTML and the Web pulled a Borg on the world. But that changed with AJAX. Suddenly, the dumb twin had power.

    Then Node.js was spawned, turning developers’ heads with its speed. Not only was JavaScript faster on the server than anyone had expected, but it was often faster than Java and other options. The Web’s steady diet of small, quick, endless requests for data has since made Node.js more common, as Web pages have grown more dynamic.
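
    To make that concrete, here is a minimal sketch of the kind of server code being described: a tiny Node.js HTTP server that handles every request with a callback on a single thread. The port number and response text are placeholders, not anything from the article.

        // Minimal Node.js HTTP server (sketch).
        const http = require('http');

        // Every incoming request triggers this callback; no extra threads are spawned.
        const server = http.createServer((req, res) => {
          res.writeHead(200, { 'Content-Type': 'text/plain' });
          res.end('Hello from Node.js\n');
        });

        server.listen(3000, () => {
          console.log('Listening on http://localhost:3000');
        });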

    While it may have been unthinkable 20 years ago, the quasi-twins are now locked in a battle for control of the programming world. On one side are the deep foundations of solid engineering and architecture. On the other side are simplicity and ubiquity. Will the old-school compiler-driven world of Java hold its ground, or will the speed and flexibility of Node.js help JavaScript continue to gobble up everything in its path?

    Where Java wins: Rock-solid foundation

    I can hear the developers laughing. Some may even be dying of heart failure. Yes, Java has glitches and bugs, but relatively speaking, it’s the Rock of Gibraltar. The same faith in Node.js is many years off. In fact, it may be decades before the JavaScript crew writes nearly as many regression tests as Sun/Oracle developed to test the Java Virtual Machine. When you boot up a JVM, you get 20 years of experience from a solid curator determined to dominate the enterprise server. When you start up JavaScript, you get the work of an often cantankerous coalition that sometimes wants to collaborate and sometimes wants to use the JavaScript standard to launch passive-aggressive attacks.

    Where Node wins: Ubiquity

    Thanks to Node.js, JavaScript finds a home on the server and in the browser. Code you write for one will more than likely run the same way on both. Nothing is guaranteed in life, but this is as close as it gets in the computer business. It’s much easier to stick with JavaScript for both sides of the client/server divide than it is to write something once in Java and again in JavaScript, which you would likely need to do if you decided to move business logic you wrote in Java for the server to the browser. Or maybe the boss will insist that the logic you built for the browser be moved to the server. In either direction, Node.js and JavaScript make it much easier to migrate code.
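
    As a sketch of what “the same code on both sides” can look like, here is one common pattern (not the only one): a small validation function written once, usable from Node.js via require() and from a plain script tag in the browser. The function itself is invented for illustration.

        // validate.js -- one file, usable from Node.js *and* the browser.
        (function (root) {
          function isValidEmail(address) {
            // Deliberately simple check, for illustration only.
            return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(address);
          }

          if (typeof module !== 'undefined' && module.exports) {
            module.exports = { isValidEmail: isValidEmail }; // Node.js / CommonJS
          } else {
            root.isValidEmail = isValidEmail;                // browser global
          }
        })(this);

    On the server you would call require('./validate').isValidEmail(address); in the browser you include the same file with a script tag and call isValidEmail(address) directly.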

    Where Java wins: Better IDEs

    Java developers have Eclipse, NetBeans, or IntelliJ, three top-notch tools that are well-integrated with debuggers, decompilers, and servers. Each has years of development, dedicated users, and solid ecosystems filled with plug-ins.

    Meanwhile, most Node.js developers type words into the command line and code into their favorite text editor. Some use Eclipse or Visual Studio, both of which support Node.js. Of course, the surge of interest in Node.js means new tools are arriving, some of which, like IBM’s Node-RED, offer intriguing approaches, but they’re still a long way from being as complete as Eclipse. WebStorm, for instance, is a solid commercial tool from JetBrains that links in many command-line build tools.

    Of course, if you’re looking for an IDE that edits and juggles tools, the new tools that support Node.js are good enough. But if you ask your IDE to let you edit while you operate on the running source code like a heart surgeon slices open a chest, well, Java tools are much more powerful. It’s all there, and it’s all local.

    Where Node wins: Build process simplified by using same language

    Complicated build tools like Ant and Maven have revolutionized Java programming. But there’s only one issue. You write the specification in XML, a data format that wasn’t designed to support programming logic. Sure, it’s relatively easy to express branching with nested tags, but there’s still something annoying about switching gears from Java to XML merely to build something.
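
    For contrast with the XML-driven Ant and Maven builds described above, here is a sketch of a build specification that is itself JavaScript, using Grunt, one of the Node-world task runners. The plugin names and file paths are illustrative, not a recommended setup.

        // Gruntfile.js -- the build "specification" is ordinary JavaScript.
        module.exports = function (grunt) {
          grunt.initConfig({
            jshint: { all: ['src/**/*.js'] },                                     // lint the sources
            uglify: { build: { files: { 'dist/app.min.js': ['src/app.js'] } } }   // minify
          });

          grunt.loadNpmTasks('grunt-contrib-jshint');
          grunt.loadNpmTasks('grunt-contrib-uglify');

          // Branching, looping, helper functions: all available, because it's just JS.
          grunt.registerTask('default', ['jshint', 'uglify']);
        };

    Gulp or plain npm scripts fill the same role; the point is that any logic in the build is written in the same language as the application.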

    Where Java wins: Remote debugging

    Java boasts incredible tools for monitoring clusters of machines. There are deep hooks into the JVM and elaborate profiling tools to help identify bottlenecks and failures. The Java enterprise stack runs some of the most sophisticated servers on the planet, and the companies that use those servers have demanded the very best in telemetry. All of these monitoring and debugging tools are quite mature and ready for you to deploy.

    Where Node wins: Database queries

    Queries for some of the newer databases, like CouchDB, are written in JavaScript. Mixing Node.js and CouchDB requires no gear-shifting, let alone any need to remember syntax differences.
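
    To show what that looks like, here is the shape of a CouchDB view: the “query” is a JavaScript map function stored in a design document in the database. The document fields used here (type, customer, total) are invented for illustration.

        // A CouchDB view is defined by a JavaScript map function.
        // emit(key, value) adds a row to the view's index.
        function (doc) {
          if (doc.type === 'order') {
            emit(doc.customer, doc.total);
          }
        }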

    Meanwhile, many Java developers use SQL. Even when they use the Java DB (formerly Derby), a database written in Java for Java developers, they write their queries in SQL. You would think they would simply call Java methods, but you’d be wrong. You have to write your database code in SQL, then let Derby parse it. SQL is a nice language, but it’s a completely different one, and many development teams need different people to write the SQL and the Java.

    Where Java wins: Libraries

    There is a huge collection of libraries available in Java, and they offer some of the most serious work around. Text indexing tools like Lucene and computer vision toolkits like OpenCV are two examples of great open source projects that are ready to be the foundation of a serious project. There are plenty of libraries written in JavaScript and some of them are amazing, but the depth and quality of the Java code base is superior.

    Where Node wins: JSON

    When databases spit out answers, Java goes to elaborate lengths to turn the results into Java objects. Developers will argue for hours about POJO mappings, Hibernate, and other tools. Configuring them can take hours or even days. Eventually, after all of that conversion, the Java code gets its Java objects.

    Many Web services and databases return data in JSON, a natural part of JavaScript. The format is now so common and useful that many Java developers have adopted it, and a number of good JSON parsers are available as Java libraries as well. But JSON is part of the foundation of JavaScript. You don’t need libraries. It’s all there and ready to go.
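
    A quick sketch of the point, with a made-up payload: JSON coming back from a service is one JSON.parse call away from being a live JavaScript object, and one JSON.stringify call away from going back out.

        // The payload below is invented for illustration.
        const payload = '{"name": "quinoa bowl", "price": 6.95, "tags": ["vegetarian"]}';

        const item = JSON.parse(payload);    // string -> object, no mapping layer
        console.log(item.name, item.price);  // "quinoa bowl" 6.95

        item.tags.push('warm');
        console.log(JSON.stringify(item));   // object -> string, ready to send back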

    Where Java wins: Solid engineering

    It’s a bit hard to quantify, but many of the complex packages for serious scientific work are written in Java because Java has strong mathematical foundations. Sun spent a long time sweating the details of the utility classes and it shows. There are BigIntegers, elaborate IO routines, and complex Date code with implementations of both Gregorian and Julian calendars.

    JavaScript is fine for simple tasks, but there’s plenty of confusion in the guts. One easy way to see this is in JavaScript’s three different results for functions that don’t have answers: undefined, NaN, and null. Which is right? Well, each has its role — one of which is to drive programmers nuts trying to keep them straight. Issues about the weirder corners of the language rarely cause problems for simple form work, but they don’t feel like a good foundation for complex mathematical and type work.
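
    The three “no answer” values are easy to see in a few lines; the NaN comparison at the end is one of the corners that trips people up.

        let x;                                // declared but never assigned
        console.log(x);                       // undefined

        console.log(parseInt('quinoa', 10));  // NaN -- "not a number"

        console.log('abc'.match(/z/));        // null -- a search that found nothing

        console.log(NaN === NaN);             // false -- NaN is not even equal to itself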

    Where Node wins: Speed

    People love to praise the speed of Node.js. The data comes in and the answers come out like lightning. Node.js doesn’t mess around with setting up separate threads with all of the locking headaches. There’s no overhead to slow down anything. You write simple code and Node.js takes the right step as quickly as possible.

    This praise comes with a caveat. Your Node.js code better be simple and it better work correctly. If it deadlocks, the entire server could lock up. Operating system developers have pulled their hair out creating safety nets that can withstand programming mistakes, but Node.js throws away these nets.
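
    Here is a small sketch of that caveat, with an invented path and delay: because Node.js handles everything on one thread, a single slow synchronous handler stalls every other request until it finishes.

        const http = require('http');

        http.createServer((req, res) => {
          if (req.url === '/slow') {
            const end = Date.now() + 5000;           // burn the CPU for ~5 seconds
            while (Date.now() < end) { /* busy wait -- nothing else can run */ }
          }
          res.end('done\n');
        }).listen(3000);

        // While /slow is spinning, a request to any other path simply waits:
        // the event loop cannot reach it until the busy loop finishes.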

    Where Java wins: Threads

    Fast code is great, but it’s usually more important that it be correct. Here is where Java’s extra features make sense.

    Java’s Web servers are multithreaded. Creating multiple threads may take time and memory, but it pays off. If one thread deadlocks, the others continue. If one thread requires longer computation, the other threads aren’t starved for attention (usually).

    If one Node.js request runs too slowly, everything slows down. There’s only one thread in Node.js, and it will get to your event when it’s good and ready. It may look superfast, but underneath it uses the same architecture as a one-window post office in the week before Christmas.

    There have been decades of work devoted to building smart operating systems that can juggle many different processes at the same time. Why go back in time to the ’60s when computers could handle only one thread?

    Where Node wins: Momentum

    Yes, all of our grandparents’ lessons about thrift are true. Waste not; want not. It can be painful to watch Silicon Valley’s foolish devotion to the “new” and “disruptive,” but sometimes cleaning out the cruft makes the most sense. Yes, Java can keep up, but there’s old code everywhere. Sure, Java has new IO routines, but it also has old IO routines. Plenty of applet and util classes can get in the way.

    Where both win: Cross-compiling from one to the other

    The debate over whether to use Java or Node.js on your servers can and will go on for years. Unlike most debates, however, this one lets us have it both ways: Java can be cross-compiled into JavaScript. Google does this frequently with Google Web Toolkit, and some of its most popular websites have Java code running in them — Java that was translated into JavaScript.

    There’s a path in the other direction, too. JavaScript engines like Rhino run JavaScript inside your Java application where you can link to it. If you’re really ambitious, you can link in Google’s V8 engine.

    Voilà. All of the code can link together harmoniously, and you don’t need to choose.