In September 2007, while leafing through his copy of IEEE Spectrum, Zhenqiang (Jack) Ma, an engineer at the University of Wisconsin-Madison whose research focus is microwave electronics, came across a statistic that left him baffled: "I was shocked by the number of cell phones that are discarded daily in the U.S. and that are still in working order—426,000 per day. That is a huge number, and as a researcher, I was concerned," he says. And rightly so; each cell phone contains chips made of poisonous gallium arsenide (GaAs). In the 26 May issue of Nature Communications, Ma and a materials-scientist colleague, together with collaborators at UW-Madison and the Madison-based Forest Products Laboratory (FPL), published a paper describing a technique for making biodegradable semiconductor chips out of wood.


What's more, they demonstrated that microwave transmitter and receiver chips made this way perform as well as their silicon or GaAs counterparts. "Actually, our work was inspired by the IEEE Spectrum article," says Ma. Unlike the silicon, GaAs, and petroleum-based plastic substrates used in electronics—none of which are biodegradable—the substrate for Ma and company's "green" chips is made of a type of paper.

But unlike ordinary paper, which typically consists of wood fibers 10 micrometers thick or more—making it rough and fairly easy to tear—their substrate uses much smaller fibers. "If you chop down the wood into nanosize fibers, you find that the fibers are single crystals. If you put this material together to make a substrate, it becomes very strong—stronger than the paper we use," says Ma. "It also becomes transparent and has low RF energy loss," he adds. The cellulose nanofibril (CNF) "paper" they used is about 200 micrometers thick. Although the researchers coated it with a thin epoxy layer to protect it from moisture, this does not affect its biodegradability. "If we put it in a fungus environment, the fungus can still eat it," says the Wisconsin researcher. To create the green chips, the researchers started out with silicon or GaAs devices sitting atop substrates made of the same material.

Then they released the circuits from their original substrates and transferred them onto the nanofibril substrates. Using this technique, the researchers created several microwave GaAs devices, such as arrays of GaInP/GaAs heterojunction bipolar transistors, as well as circuits containing capacitors, RF inductors and Schottky diodes. The performance of these flexible devices is exactly the same as that of rigid circuits, reports Ma. The group also demonstrated several silicon-based digital logic circuits on paper substrates. However, the substrates may have a wider range of applications still: nanofibril films may be used in photovoltaic cells and also in displays, because they have better light-transmission properties than glass, says Ma.

Using paper substrates would allow a reduction in the amount of GaAs used in chips by a factor of 3,000, which would bring chips within the pollution standards for arsenic set in the U.S. Additionally, this technique would help cut costs by reducing the amount of expensive materials, such as gallium arsenide and highly purified silicon, that are packed into electronic gadgets. "What we are looking at are future applications," says Ma.

The paper includes a comparison of today's production of rigid electronics with projected production of flexible electronics, whose volume is expected to greatly exceed that of rigid electronics.

Why the Millennials Are the Most Important Generation Yet
Millennials are those born between 1980 and 2000, today between the ages of 15 and 35. This post is about millennials – why they are changing the game, how to hire them, and how to keep them motivated. The data presented below comes from Mary Meeker's "Internet Trends Report" – one of the reports I look forward to each year. Kudos to Mary and Kleiner Perkins for this awesome data.


This is my analysis of what it all means.

Millennials Are Changing the Game
No matter what Internet-related business you're in, millennials are your most important demographic. Understanding how they think is critical. It's an understatement to say that the world they've grown up in is dramatically different from that of Gen X (born 1965–1980) and Baby Boomers (born 1946–1964). This year, they became the largest generation in the workforce. And I would posit that this workforce is still largely misunderstood – and immensely undervalued. I personally have a team of five millennials that is doing amazing things – they are more flexible, motivated, creative, and hard-working than most. If you want to tap into the millennial talent pool and keep them on your team, you have to adapt to their new modes of thinking.

Millennials' Values Are Changing
A cohort of 4,000 graduates under the age of 31, from around the world, were asked: which three benefits would you most value from an employer? The top three responses, by healthy margins, might not be what you'd expect: • Training and Development – they want to learn • Flexible Hours – they want to be spontaneous, they want to feel "free" • Cash Bonuses – they want to have "upside" in the value they are creating. Empowered by a world connected by technology, millennials have new tools and capabilities at their disposal. Many of the tasks we had to do at work have been digitized, dematerialized, demonetized, and democratized – and the people in this generation know how to leverage these exponential tools to do things faster, better, and more effectively than their predecessors. As such, they crave flexibility. They expect to be mobile and to work from home, office, or café at will. As the Meeker report outlines: • 32% believe they will be working 'mainly flexible hours' in the future. • 38% are freelancing, versus 32% of those over the age of 35.

• ~20% identify as ‘night owls,’ and often prefer to work outside of normal business hours. • 34% prefer to collaborate online at work, as opposed to in-person or via phone.

• 45% use personal smartphones for work purposes (vs. 18% for older generations). • 41% are likely to download applications to use for work purposes in the next 12 months and use their own money to pay for them.


Millennials Live in an "On-Demand" World
As I've mentioned in a previous post, this year the "on-demand" economy (think companies like Uber, Airbnb, Instacart, etc.) has exploded. According to venture capital research firm CB Insights, funding for on-demand companies jumped 514 percent last year to $4.12 billion. New investments in early 2015 have totaled at least $3.78 billion. And, as it also turns out, millennials make up the largest cohort of "on-demand" workers. This isn't a coincidence – it is largely reflective of their different mindsets. Getting things "on-demand" – what they want, when they want, where they want, how they want – is indicative of their priorities. Look at the chart below: hiring managers ranked qualities each generation is more likely to possess. The results: millennials are significantly more narcissistic (more on this later), open to change, creative, money driven, adaptable, and entrepreneurial than other generations.

And There Is a Huge Disconnect
There is a perception gap between managers and millennials – and it is making it difficult for companies with "older" cultures to attract and retain the best talent out there. The Career Advisory Board did another study comparing managers' and millennials' views of the most important factors that indicate career success to millennials. Most managers (48%) thought that MONEY was the most important thing to millennials.

What did the millennials want most? MEANINGFUL WORK. This is consistent with my experience with the many millennial entrepreneurs and colleagues I work with, advise, invest in and support.

Here are a few tips I’ve found useful in how to hire and retain great millennials. How to Hire and Retain Millennials • Give them the freedom/autonomy to work the way they want to work. In my mind (and this depends largely on the job/company), if the millennials on my team have a laptop and an Internet connection, they can be working. Some of them work best at 11 p.m. Some of them want to work and travel at the same time (telepresence robots like the BEAM make working remotely a breeze, and VR will make it even easier down the line). The notion of a 9-to-5 workweek isn’t attractive to them. Instead, be clear about milestones and deadlines and let your team accomplish them as they see fit.

• Have a massively transformative purpose (MTP). Millennials are mission-driven. The brightest, most hard-working of them want to change the world. You need to think 10x bigger and catalyze innovation in your organization by finding a massively transformative purpose that your team can rally around.

Think about Elon Musk’s MTP: to go to Mars and make humanity an interplanetary species. Or Google’s: to organize the world’s information. Millennials will flock to you if you have a compelling MTP and if your organization isn’t afraid to take moonshots.

• Align the incentives. If millennials have “upside” in the value that they create, they are going to work harder, faster and better than if a) they don’t have upside, or b) their upside isn’t clear. The game these days is all about incentives. Profit-sharing, prizes, status, gamification and friendly competition are all highly motivating to this group. Leverage these strategies to get the best work out of your team.

My goal is to give them extraordinary upside based on their extraordinary results. • Challenge them. Millennials love a good challenge. You saw in the results above that they are more narcissistic and perhaps egotistical than previous generations. Use this in your favor.

Give them the authority and autonomy to challenge you. Let them prove why their particular solution is better than yours. They are also more creative and entrepreneurial than past generations, so you might, at the very least, be surprised by the results you get. • Encourage them to experiment with exponential technologies. If they think they can optimize a process by using a new tech platform, say yes! Encourage them to leverage crowdsourcing, crowdfunding, machine learning/data mining, robotics/telepresence, VR/AR, etc.

All of these experiments, if they work, will make your business more scalable, less expensive, and more fun. And much, much more. The proof is in the pudding: this most excellent blog was drafted by a superstar on my team (age 24) at 10 p.m. on a Saturday night, passed to me to edit, then to my other rockstars at 11 p.m. for a final edit and to get out to you. I love my millennial team for their brilliance and dedication.

[Lead image courtesy of Shutterstock; charts courtesy of KPCB]

End Small Thinking about Big Data
It is time to end small thinking about big data. Instead of thinking about how to apply the insights of big data to business problems, we often hear more tactical questions, such as how to store large amounts of data or analyze it in new ways. This thinking is small because it focuses on technology and new forms of data in an isolated and abstract way.

Using Big Data Energy to Start a Movement
We must remember that big data isn't about technology; it is a movement, a mind-set, that's ingrained in an organization.

How can we channel the energy that surrounds big data into a cultural transformation? Movements don't succeed without a compelling vision of the future. The goal should be to create a data culture, to build on what we've done in the past, to get everyone involved with data, and to derive more value and analytics from all the data to make business decisions. This is the real victory.

Please read the attached brief.

This slide-based report provides the most comprehensive research available on the U.S. community solar market. It defines and segments the market, forecasts installations in total and by state, outlines the legislation that is helping and hampering community solar, and provides a snapshot of today's competitive landscape. Solar power generated in urban areas is poised to make enormous strides in capacity, new industry research concludes.

GTM Research, the research arm of Greentech Media, expects community solar installations to grow fivefold in 2015 to 115 megawatts of capacity. By 2020, GTM researchers expect annual installations to reach 500 megawatts, the firm concludes in its report. At this rate of growth, community solar will account for a cumulative 1.8 gigawatts of capacity in five years. Researchers found that 24 states currently have at least one operational community solar project. Four states, however – California, Colorado, Massachusetts and Minnesota – represent 80 percent of the community solar installations expected over the next two years. Another 20 states have enacted or are considering enabling legislation for community-based solar projects. "Looking ahead to 2020, the community solar opportunity is poised to become more geographically diversified, as developers ramp up service offerings to utilities in states without community solar legislation in place and as national rooftop solar companies enter the community solar scene," GTM Research analyst Cory Honeyman said. The report identifies 29 companies that are active in installing community solar projects.
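As a rough sanity check on those numbers, the arithmetic below assumes smooth exponential growth between the report's 2015 and 2020 endpoints; the implied growth rate is my back-of-the-envelope calculation, not a figure from the report.

```python
# Back-of-the-envelope check on the forecast above, assuming smooth
# exponential growth from 115 MW installed in 2015 to ~500 MW/year in 2020.
start_mw, end_mw, years = 115, 500, 5
growth = (end_mw / start_mw) ** (1 / years) - 1              # ~34% per year
annual = [start_mw * (1 + growth) ** i for i in range(years + 1)]
print(f"implied annual growth: {growth:.0%}")                # 34%
print(f"cumulative 2015-2020: {sum(annual) / 1000:.1f} GW")  # ~1.6 GW
# In the ballpark of the report's 1.8 GW once pre-2015 capacity is added.
```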

For more, download the report or read the Clean Technica story.

The utility industry is facing a number of challenges and trends related to enterprise asset information management. Learn how OpenText, SAP and Microsoft best practices help organizations like Anglian Water maximize utility asset management and performance. In this collaborative paper, OpenText, Microsoft and PennEnergy look at key utility industry challenges and trends related to enterprise asset information management. The paper considers the types of solutions needed to address these issues as well as the specific solutions OpenText and Microsoft bring to the table.

It also highlights a particular, successful solution implemented by water and wastewater utility Anglian Water.

3 BIG DATA SECURITY ANALYTICS TECHNIQUES YOU CAN APPLY NOW TO CATCH ADVANCED PERSISTENT THREATS
By Randy Franklin Smith and Brook Watson. Commissioned by HP.

In this unprecedented period of advanced persistent threats (APTs), organizations must take advantage of new technologies to protect themselves. Detecting APTs is complex because, unlike intensive, overt attacks, APTs tend to follow a "low and slow" attack profile that is very difficult to distinguish from normal, legitimate activity—truly a matter of looking for the proverbial needle in a haystack. The volume of data that must be analyzed is overwhelming.

One technology that holds promise for detecting these nearly invisible APTs is Big Data Security Analytics (BDSA). In this technical paper, I will demonstrate three ways that the BDSA capabilities of HP ArcSight can help to fight APTs:
• 1. Detecting account abuse by insiders and APTs
• 2. Pinpointing data exfiltration by APTs
• 3. Alerting you of new program execution
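To make the first technique concrete, here is a minimal, hypothetical sketch of baseline-based account-abuse detection. It illustrates the general idea only; the function names and the z-score rule are my assumptions, not HP ArcSight's actual implementation.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical baseline detector for account abuse: learn each user's
# typical login hour, then flag logins far outside that pattern.

history = defaultdict(list)        # user -> past login hours

def record(user, hour):
    history[user].append(hour)

def is_anomalous(user, hour, z_threshold=3.0):
    hours = history[user]
    if len(hours) < 10:
        return False               # not enough baseline yet
    mu, sigma = mean(hours), stdev(hours)
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > z_threshold

for h in [9, 10, 9, 11, 10, 9, 10, 11, 9, 10]:
    record("alice", h)

print(is_anomalous("alice", 10))   # False: a normal working-hours login
print(is_anomalous("alice", 3))    # True: a 3 a.m. login stands out
```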

Please read the attached whitepapers.

Quantum computers: A little bit, better
After decades languishing in the laboratory, quantum computers are attracting commercial interest.

A COMPUTER proceeds one step at a time. At any particular moment, each of its bits—the binary digits it adds and subtracts to arrive at its conclusions—has a single, definite value: zero or one.

At that moment the machine is in just one state, a particular mixture of zeros and ones. It can therefore perform only one calculation next. This puts a limit on its power. To increase that power, you have to make it work faster.

But bits do not exist in the abstract. Each depends for its reality on the physical state of part of the computer’s processor or memory. And physical states, at the quantum level, are not as clear-cut as classical physics pretends. That leaves engineers a bit of wriggle room. By exploiting certain quantum effects they can create bits, known as qubits, that do not have a definite value, thus overcoming classical computing’s limits. Around the world, small bands of such engineers have been working on this approach for decades.

Using two particular quantum phenomena, called superposition and entanglement, they have created qubits and linked them together to make prototype machines that exist in many states simultaneously. Such quantum computers do not require an increase in speed for their power to increase. In principle, this could allow them to become far more powerful than any classical machine—and it now looks as if principle will soon be turned into practice. Big firms, such as Google, Hewlett-Packard, IBM and Microsoft, are looking at how quantum computers might be commercialised. The world of quantum computation is almost here.

A Shor thing
As with a classical bit, the term qubit is used, slightly confusingly, to refer both to the mathematical value recorded and to the element of the computer doing the recording. Quantum uncertainty means that, until it is examined, the value of a qubit can be described only in terms of probability. Its possible states, zero and one, are, in the jargon, superposed—meaning that to some degree the qubit is in one of these states, and to some degree it is in the other. Those superposed probabilities can, moreover, rise and fall with time.
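A short sketch may make the probability bookkeeping concrete. The snippet below is a toy model, not any vendor's software: a qubit is represented as two complex amplitudes whose squared magnitudes give the measurement probabilities.

```python
import numpy as np

# Toy model of superposition: a qubit is a unit vector of two complex
# amplitudes; squared magnitudes give the odds of reading zero or one.

state = np.array([3 + 0j, 4 + 0j])
state = state / np.linalg.norm(state)      # normalise so |a|^2 + |b|^2 = 1

p_zero, p_one = np.abs(state) ** 2
print(f"P(0) = {p_zero:.2f}, P(1) = {p_one:.2f}")   # P(0) = 0.36, P(1) = 0.64

# Each extra qubit doubles the number of amplitudes to track:
two_qubits = np.kron(state, state)         # 2 qubits -> 4 amplitudes
print(two_qubits.shape)                    # (4,)
```

The last two lines hint at why this matters: describing n qubits takes 2**n amplitudes, which is exactly the bookkeeping a classical machine cannot do cheaply.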

The other pertinent phenomenon, entanglement, is caused because qubits can, if set up carefully so that energy flows between them unimpeded, mix their probabilities with one another. Achieving this is tricky. The process of entanglement is easily disrupted by such things as heat-induced vibration.

As a result, some quantum computers have to work at temperatures close to absolute zero. If entanglement can be achieved, though, the result is a device that, at a given instant, is in all of the possible states permitted by its qubits’ probability mixtures. Entanglement also means that to operate on any one of the entangled qubits is to operate on all of them. It is these two things which give quantum computers their power.

Harnessing that power is, nevertheless, hard. Quantum computers require special algorithms to exploit their special characteristics. Such algorithms break problems into parts that, as they are run through the ensemble of qubits, sum up the various probabilities of each qubit’s value to arrive at the most likely answer. One example—Shor’s algorithm, invented by Peter Shor of the Massachusetts Institute of Technology—can factorise any non-prime number. Factorising large numbers stumps classical computers and, since most modern cryptography relies on such factorisations being difficult, there are a lot of worried security experts out there.

Cryptography, however, is only the beginning. Each of the firms looking at quantum computers has teams of mathematicians searching for other things that lend themselves to quantum analysis, and crafting algorithms to carry them out. Top of the list is simulating physics accurately at the atomic level. Such simulation could speed up the development of drugs, and also improve important bits of industrial chemistry, such as the energy-greedy Haber process by which ammonia is synthesised for use in much of the world’s fertiliser. Better understanding of atoms might lead, too, to better ways of desalinating seawater or sucking carbon dioxide from the atmosphere in order to curb climate change. It may even result in a better understanding of superconductivity, permitting the invention of a superconductor that works at room temperature.

That would allow electricity to be transported without losses. Quantum computers are not better than classical ones at everything. They will not, for example, download web pages any faster or improve the graphics of computer games. But they would be able to handle problems of image and speech recognition, and real-time language translation. They should also be well suited to the challenges of the big-data era, neatly extracting wisdom from the screeds of messy information generated by sensors, medical records and stockmarkets. For the firm that makes one, riches await.

Cue bits
How best to do so is a matter of intense debate.

The biggest question is what the qubits themselves should be made from. A qubit needs a physical system with two opposite quantum states, such as the direction of spin of an electron orbiting an atomic nucleus. Several things which can do the job exist, and each has its fans.

Some suggest nitrogen atoms trapped in the crystal lattices of diamonds. Calcium ions held in the grip of magnetic fields are another favourite.

So are the photons of which light is composed (in this case the qubit would be stored in the plane of polarisation). And quasiparticles, which are vibrations in matter that behave like real subatomic particles, also have a following.

The leading candidate at the moment, though, is to use a superconductor in which the qubit is either the direction of a circulating current, or the presence or absence of an electric charge. Both Google and IBM are banking on this approach. It has the advantage that superconducting qubits can be arranged on semiconductor chips of the sort used in existing computers. That, the two firms think, should make them easier to commercialise. Those who back photon qubits argue that their runner will be easy to commercialise, too.

As one of their number, Jeremy O’Brien of Bristol University, in England, observes, the computer industry is making more and more use of photons rather than electrons in its conventional products. Quantum computing can take advantage of that—a fact that has not escaped Hewlett-Packard, which is already expert in shuttling data encoded in light between data centres. The firm once had a research programme looking into qubits of the nitrogen-in-diamond variety, but its researchers found bringing the technology to commercial scale tricky. Now Ray Beausoleil, one of HP’s fellows, is working closely with Dr O’Brien and others to see if photonics is the way forward. For its part, Microsoft is backing a more speculative approach.

This is spearheaded by Michael Freedman, a famed mathematician (he is a recipient of the Fields medal, which is regarded by mathematicians with the same awe that a Nobel prize evokes among scientists). Dr Freedman aims to use ideas from topology—a description of how the world is folded up in space and time—to crack the problem. Quasiparticles called anyons, which move in only two dimensions, would act as his qubits. His difficulty is that no usable anyon has yet been confirmed to exist. But laboratory results suggesting one has been spotted have given him hope. And Dr Freedman believes the superconducting approach may be hamstrung by the need to correct errors—errors a topological quantum computer would be inherently immune to, because its qubits are shielded from jostling by the way space is folded up around them.

For non-anyonic approaches, correcting errors is indeed a serious problem. Tapping into a qubit prematurely, to check that all is in order, will destroy the superposition on which the whole system relies.

There are, however, ways around this. In March John Martinis, a renowned quantum physicist whom Google headhunted last year, reported a device of nine qubits, four of which can be interrogated without disrupting the other five. That is enough to reveal what is going on. The prototype successfully detected bit-flip errors, one of the two kinds of snafu that can scupper a calculation. And in April, a team at IBM reported a four-qubit version that can catch both those and the other sort, phase-flip errors.
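The bit-flip detection described above can be illustrated with a classical toy: encode one logical bit in three physical bits and measure only parities ("syndromes"), never the logical value itself. Real quantum codes must also catch phase flips and cannot copy states, so treat this strictly as an analogy.

```python
import random

# Classical toy of the bit-flip code: two parity checks locate a single
# flipped bit without reading the logical value directly.

def encode(bit):
    return [bit, bit, bit]

def noisy(bits, p=0.2):
    return [b ^ (random.random() < p) for b in bits]

def correct(bits):
    s1 = bits[0] ^ bits[1]                       # parity of bits 0,1
    s2 = bits[1] ^ bits[2]                       # parity of bits 1,2
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get((s1, s2))
    if flip is not None:
        bits[flip] ^= 1                          # undo the detected flip
    return bits

word = noisy(encode(1))
print(word, "->", correct(word))   # recovers [1, 1, 1] if at most one bit flipped
```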

Google is also collaborating with D-Wave of Vancouver, Canada, which sells what it calls quantum annealers. The field's practitioners took much convincing that these devices really do exploit the quantum advantage, and in any case they are limited to a narrower set of problems—such as searching for images similar to a reference image. But such searches are just the type of application of interest to Google. In 2013, in collaboration with NASA and USRA, a research consortium, the firm bought a D-Wave machine in order to put it through its paces. Hartmut Neven, director of engineering at Google Research, is guarded about what his team has found, but he believes D-Wave's approach is best suited to calculations involving fewer qubits, while Dr Martinis and his colleagues build devices with more. Which technology will win the race is anybody's guess.

But preparations are already being made for its arrival—particularly in the light of Shor's algorithm.

Spooky action
Documents released by Edward Snowden, a whistleblower, revealed that the Penetrating Hard Targets programme of America's National Security Agency was actively researching "if, and how, a cryptologically useful quantum computer can be built". In May IARPA, the American government's intelligence-research arm, issued a call for partners in its Logical Qubits programme, to make robust, error-free qubits.

In April, meanwhile, Tanja Lange and Daniel Bernstein of Eindhoven University of Technology, in the Netherlands, announced PQCRYPTO, a programme to advance and standardise “post-quantum cryptography”. They are concerned that encrypted communications captured now could be subjected to quantum cracking in the future. That means strong pre-emptive encryption is needed immediately.

Quantum-proof cryptomaths does already exist. But it is clunky and so eats up computing power.

PQCRYPTO’s objective is to invent forms of encryption that sidestep the maths at which quantum computers excel while retaining that mathematics’ slimmed-down computational elegance. Ready or not, then, quantum computing is coming. It will start, as classical computing did, with clunky machines run in specialist facilities by teams of trained technicians. Ingenuity being what it is, though, it will surely spread beyond such experts’ grip.

Quantum desktops, let alone tablets, are, no doubt, a long way away. But, in a neat circle of cause and effect, if quantum computing really can help create a room-temperature superconductor, such machines may yet come into existence.

By weaving conductive metal threads into fabric, Google says it can transform your pants into a touchpad. Or your shirt. Or anything made of fabric. Car seats, sofas, chairs, hideous Christmas sweaters... underwear? Mini electronics control a swatch of conductive fabric.

The conductive cloth is, according to Google, "indistinguishable" from regular fabric: it is comfortable, controlled by a chip the size of a jacket button, senses touch like a smartphone screen (although it isn't quite as sensitive), and can even infer gestures using machine learning algorithms. These can be communicated to external devices wirelessly. It's probably too early to speculate on how much an item of clothing like this might cost. But Google calls the necessary components "cost-efficient," and the fabric—including materials like silk, cotton, and polyester—can be produced on the standard machines we use for textiles today. This is important. Conductive fabrics aren't new, but reliably making them at scale is a big next step.
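For a flavor of what "inferring gestures" from such a fabric might involve, here is a deliberately simple stand-in that classifies a swipe from a sequence of touch positions. A real system like Project Jacquard's would use a trained model over noisy capacitance traces; nothing below is Google's actual pipeline.

```python
# Toy swipe classifier for a fabric touch grid: compare the first and
# last touch positions and pick the dominant direction of motion.

def swipe_direction(centroids):
    """centroids: chronological list of (x, y) touch positions."""
    (x0, y0), (x1, y1) = centroids[0], centroids[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) > abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"

print(swipe_direction([(0, 1), (2, 1), (5, 2)]))   # -> right
```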

You'd still need to charge your pants every few days, though. More self-sufficient iterations might harvest power from motion.

You'd also want them to be selective: clothing is constantly touching your body or brushing objects as you walk around, so filtering deliberate touches and gestures from incidental ones will be key.

Apart from broadly noting that "connected clothes" can be used to interact with "services, devices, and environments," Google isn't planning to sort out exact applications. That, they say, will be up to developers and inventors. Instead, they'll focus on perfecting the tech (in partnership with Levi's) and providing a launchpad for the creativity of others.

A screen displays touches registered on a section of conductive fabric.

"From a fashion designer's perspective and a wearer's perspective, we want this to be as flexible as the stuff you already wear," says the design lead on Project Jacquard, "as opposed to electronics, which have a very fixed form factor and very fixed functionality." Design first, adaptable functionality.

Google's smart clothes are one example of a broadly connected future. We're integrating computing into everything else too. How will we control a smart house or office? Smartphones and tablets are likely a piece of the puzzle. And indoor tracking systems, akin to Kinect (but much more precise), will be another. Meanwhile, conductive materials—fabrics, inks, and paints—might allow us to embed controls on any surface. Perhaps certain dials, knobs, switches, and handles will be gradually replaced by conductive surfaces.

We like this direction because it hints at a much more seamless integration of technology. In recent years, computing has been awkwardly shuffling into our personal space. Fitness bands, smartwatches, Google Glass. Ultimately, people want this stuff to work, yes, but mostly to get out of the way. And that’s where we’re going. Altogether, these technologies might realize a future that looks, outwardly, a lot like the present—but where seemingly ordinary, everyday stuff can do extraordinary things.

"If you can weave the sensor into the textile, as a material," says Google's Ivan Poupyrev, "you're moving away from the electronics. You're making the basic materials of the world around us interactive."

Image Credit: Google

Link: http://singularityhub.com/2015/05/31/forget-google-glass-its-all-about-google-smart-clothes/

Like routers, most USB modems are also vulnerable to drive-by hacking
By Lucian Constantin

The majority of 3G and 4G USB modems offered by mobile operators to their customers have vulnerabilities in their Web-based management interfaces that could be exploited remotely when users visit compromised websites. The flaws could allow attackers not only to steal or manipulate text messages, contacts, Wi-Fi settings or the DNS (Domain Name System) configuration of affected modems, but also to execute arbitrary commands on their underlying operating systems. In some cases, the devices can be turned into malware delivery platforms, infecting any computers they're plugged into. Russian security researchers Timur Yunusov and Kirill Nesterov presented some of the flaws and attacks that can be used against USB modems Thursday at the Hack in the Box security conference in Amsterdam.

USB modems are actually small computers, typically running Linux or Android-based operating systems, with their own storage and Wi-Fi capability. They also have a baseband radio processor that's used to access the mobile network using a SIM card.

Many modems have an embedded Web server that powers a Web-based dashboard where users can change settings, see the modem's status, send text messages and see the messages they receive. These dashboards are often customized or completely developed by the mobile operators themselves and are typically full of security holes, Yunusov and Nesterov said. The researchers claim to have found remote code execution vulnerabilities in the Web-based management interfaces of more than 90 percent of the modems they tested. These flaws could allow attackers to execute commands on the underlying operating systems.

These interfaces can only be accessed from the computers where the modems are being used, by calling their local area network IP address. However, attackers can still exploit any vulnerabilities remotely, through a technique called cross-site request forgery (CSRF). CSRF allows code running on a website to force a visitor's browser to make a request to another website. Therefore, users visiting a malicious Web page could unintentionally perform an action on a different website where they are authenticated, including on USB modem dashboards that are only accessible locally. Many websites have implemented protection against CSRF attacks, but the dashboards of USB modems typically have no such protection.
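The standard defense the modem dashboards lacked is a per-session anti-CSRF token. The sketch below uses illustrative names rather than any vendor's real API, and shows the core check: a forged cross-site request cannot read the token, so state-changing requests without it are rejected.

```python
import hmac
import secrets

def new_session():
    """Issue a session whose token is embedded in every legitimate form."""
    return {"csrf_token": secrets.token_hex(32)}

def change_dns(session, submitted_token, new_dns):
    """Reject state-changing requests that lack the session's token.
    A forged cross-site request cannot read the token, so it fails here."""
    if not hmac.compare_digest(session["csrf_token"], submitted_token):
        raise PermissionError("possible CSRF: token missing or wrong")
    print(f"primary DNS changed to {new_dns}")

session = new_session()
change_dns(session, session["csrf_token"], "8.8.8.8")        # legitimate
try:
    change_dns(session, "attacker-guess", "203.0.113.66")    # forged
except PermissionError as err:
    print(err)
```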

The researchers said that they've only seen anti-CSRF protection on some newer USB modems made by Huawei, but even in those cases, it was possible to bypass it using brute-force techniques. Home routers have the same problem, and attackers have used CSRF to exploit vulnerabilities in more than 40 router models through users' browsers. The goal of the attack was to change the primary DNS servers used by the routers, allowing hackers to spoof legitimate websites or intercept traffic. Since USB modems act in a way that's similar to routers, providing an Internet gateway for computers, attackers can hijack their DNS settings too for a similar effect. In some cases it's also possible to get root shells on the modems or to replace their entire firmware with modified, malicious versions, the two researchers said. Attacks can go even deeper.

The researchers showed a video demonstration where they compromised a modem through a remote code execution flaw and then made it switch its device type from a network controller to a keyboard. They then used this functionality to type rogue commands on the host computer in order to install a bootkit -- a boot-level rootkit. Using CSRF is not the only way to remotely exploit some of the vulnerabilities in USB modem dashboards. In some cases the researchers found cross-site scripting (XSS) flaws that could be exploited via SMS.

In a demonstration, they sent a specially crafted text message to a modem that, when viewed by the user in the dashboard, triggered a command to reset the user's service password. The new password was sent back by the mobile operator via SMS, but the rogue code injected via XSS hid the new message in the dashboard and forwarded the password to the attackers. The researchers also mentioned other possible attacks, like locking the modem's SIM card by repeatedly entering the wrong PIN and then the wrong PUK code.
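The root flaw here is rendering attacker-controlled SMS text as HTML without escaping. A minimal sketch of the fix, using only Python's standard library and an invented message for illustration:

```python
import html

# Untrusted input: an SMS body supplied by whoever sent the message.
incoming_sms = "<script>steal(document.cookie)</script>Hello!"

# Vulnerable rendering: the script in the message runs in the browser.
unsafe = f"<div class='sms'>{incoming_sms}</div>"

# Safer rendering: escape untrusted text before it reaches the page.
safe = f"<div class='sms'>{html.escape(incoming_sms)}</div>"

print(unsafe)
print(safe)
```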

In an attempt to see how easy it would be for attackers to find vulnerable devices, the researchers set up a modem-fingerprinting script on the home page of a popular security portal in Russia. They claim to have identified over 5,000 USB modems in a week that were vulnerable to remote code execution, cross-site scripting and cross-site request forgery.

Lucian Constantin, Romania Correspondent, writes about information security, privacy, and data protection for the IDG News Service.

Slipped disks: Why preserving our digital history is hard
The Gutenberg Bible is over 500 years old, yet people can still understand it today. You don't need any special tools or training. You don't even need to speak a particular language, though a smattering of Latin will help you comprehend some of the finer detail.

Evidence of writing dates back much further, with stone slabs and pottery showing written words from over 3,000 years ago. Graphical stories, in the shape of cave paintings, stretch into the depths of human history, going back more than 30,000 years. Ancient, but we can still understand what they depict. On my desk I have three 2.5-inch floppy disks from 1985. They contain the project write-up for my 'O'-level computer science course, written on a long-since-obsolete word-processor made by Hermes. They are completely inaccessible and unreadable today. Even eBay returns zero listings for that old machine.

One of the few photos I can find of it online is buried part-way down an old web page. No real loss, of course, but in terms of data permanence that's pathetic. Barely 30 years after the information was created and stored, it has effectively been lost forever. As human civilisation has advanced, so our information storage materials have become ever more rich, yet correspondingly ever more transient.

We can still read Shakespeare's plays and look at da Vinci's beautiful drawings, yet audio tapes rot, CD substrates corrode, movie film decays and digital formats change so fast they become obsolete within a human generation, if not sooner. This is a recognised problem with old media, but there's no easy solution. Broadcasting companies attempt to keep old footage in climate-controlled environments. Libraries do the same, as do art galleries, but that only slows the deterioration; it doesn't stop it. Audio-visual histories are even trickier to manage. I wrote earlier this year about a New Zealand project to move its audio-visual archives onto new storage. An admirable project, but the new storage medium will have to be upgraded regularly if it too isn't to become obsolete.

It's not as though we can easily return to the storage media of the past. A printer company executive once said, “If you value your photos, print them out.” He would say that, of course. But the longest-lasting inkjet inks and paper are rated at 100 years (and nobody's actually tested that in real life, for obvious reasons). Yet the oldest true photograph in existence is almost twice that age. Newer technology is more capable, but less robust.

Digital archivists today might choose JPEG and PDF for storing images and documents respectively, since, although not perfect, they are the most widely viewable and transportable formats. But that's only true today and for perhaps the past 10 years. What about the year 2030 or later? Will we still have JPEG viewing software and Acrobat Reader in 2050? Even if we do, will our antique SATA SSD drives hold their data that long, assuming cosmic rays, sunspot activity and other sources of strong electromagnetic radiation haven't wiped them clean?

Probably not, which means regular upgrades in the meantime, regular transitions from an old digital format to a new one. That means expense and time, especially given the sheer volume involved: it is often estimated that 90% of the world's digital data was generated in the past two years. All of this means we will only store what we want to store.

All around the world there are projects underway to store data for the long term. Just like the New Zealand one, these involve actively choosing which items will be retained. Think of national archives' digital-preservation programmes and similar initiatives; many countries have their own such storage projects. These all take an active approach to archiving. It's a subtle but important point.

Unlike most of history so far, deliberately storing information for retrieval means we're deciding what future historians can and can't recover from our era, and that determines what they can learn about us. Anything we don't explicitly store in a long-term format will be destroyed or rendered obsolete and unrecoverable - unknown. That wasn't the case before the digital age. Some storage projects, such as the Egyptian pyramids, were designed to last forever, but most information wasn't.

Yet a 200-year-old book left in an old cupboard doesn't need any special technology for you to read it today, and nor do old drawings and paintings. But if you want your photos, documents, audio-visual files and other records to outlast you – or even last as long as you – it will take an active, conscious effort to make that happen. As for more transient information such as social media posts, emails and instant messages, nobody's going to be digging those up in a time-capsule 50 years from now. So, much of our modern life is genuinely transient and becoming more so. It won't stand the test of time.

Our virtual memories will fade even faster than our biological ones. Does that matter?

Will our civilisation become like someone with no long-term memory, focused firmly in the present and short-termist? Has that already happened? We won't know the answers for many years, by which time we may have forgotten the questions. If you have valuable information in digital form, take some time to think about how you're going to maintain it. Otherwise, 30 years from now, you too may find yourself staring at obsolete technology containing information that you will never be able to retrieve.
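For digital files, one concrete piece of that active effort is keeping a fixity manifest: a list of cryptographic digests that lets every future copy or media migration be checked for silent corruption. A minimal sketch follows; the paths and file contents are illustrative.

```python
import hashlib
import json
import pathlib

def sha256(path, chunk=1 << 20):
    """Digest a file in chunks so large archives don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest()

def build_manifest(root):
    return {str(p.relative_to(root)): sha256(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def damaged(root, manifest):
    current = build_manifest(root)
    return {name for name, digest in manifest.items()
            if current.get(name) != digest}

archive = pathlib.Path("my_archive")
archive.mkdir(exist_ok=True)
(archive / "photo.jpg").write_bytes(b"not really a jpeg")

manifest = build_manifest(archive)
pathlib.Path("manifest.json").write_text(json.dumps(manifest, indent=2))

# Years later, after copying the archive onto new media:
print(damaged(archive, manifest) or "all files intact")
```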

Why 3D Printing a Jet Engine or Car Is Just the Beginning

The 3D printing (digital manufacturing) market has had a lot of hype over the past few years. Most recently, it seems this technology arena has entered the "trough of disillusionment," as 3D printing stock prices have taken a hit. But the fact remains: this exponential technology is still in its childhood, and its potential for massive disruption (of manufacturing and supply chains) lies before us. This article is about 3D printing's vast potential — our ability to soon 3D print complex systems like jet engines, rocket engines, cars and even houses. But first, a few facts: • Today, we can 3D print in some 300 different materials, ranging from titanium to chocolate. • We can 3D print in full color.

• We can 3D print in mixed materials — imagine a single print that combines metals, plastics and rubbers. • Best of all, complexity and personalization come for free.

What Does It Mean for "Complexity to Be Free"?
Think about this: if you 3D print a solid block of titanium, or an equal-sized block with a thousand moving components inside, the time and cost of the two prints are almost exactly the same (the solid block is actually more expensive in materials). Complexity and personalization in the 3D printing process come for free — i.e., no additional cost and no additional time. Today, we're finding we can 3D print things that can't be manufactured any other way. Let's take a look at some of the exciting things being 3D printed now.
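A toy cost model makes the point: in additive manufacturing, price tracks deposited volume and machine time, and geometric complexity simply never enters the formula. All rates below are invented for illustration, not real industry figures.

```python
# Toy cost model for "complexity is free": an additive print is priced by
# deposited volume and machine time; part count and geometry never appear.

def print_cost(volume_cm3, material_usd_per_cm3=8.0,
               hours_per_cm3=0.05, machine_usd_per_hour=30.0):
    material = volume_cm3 * material_usd_per_cm3
    machine_time = volume_cm3 * hours_per_cm3 * machine_usd_per_hour
    return material + machine_time

solid_block = print_cost(1000)   # solid titanium block
mechanism = print_cost(700)      # same envelope, 1,000 moving parts inside
print(solid_block, mechanism)    # 9500.0 6650.0 -- the intricate part costs less
```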

3D Printing Rocket Engines
SpaceX SuperDraco rocket engines.

In 2014, SpaceX launched its Falcon 9 rocket with a 3D-printed Main Oxidizer Valve (MOV) body in one of the nine Merlin 1D engines (the print took less than two days, whereas a traditional castings process can take months). Even more impressive, SpaceX is now 3D printing its SuperDraco engine chamber for the Dragon 2 capsule. According to SpaceX, the process "resulted in an order of magnitude reduction in lead-time compared with traditional machining — the path from the initial concept to the first hotfire was just over three months."

On a similar note, Planetary Resources Inc. (PRI) is demonstrating the 3D printing of integrated propulsion and structures of its ARKYD series of spacecraft. This technology has the potential to reduce the parts count by 100x, with an equal reduction in cost and labor.

3D Printing Jet Engines
GE engineers recently designed, 3D printed, and fired up this simple jet engine.

GE has demonstrated the 3D printing of a complete, functioning jet engine (the size of a football), able to achieve 33,000 RPM. 3D printing has been used for decades to prototype parts — but now, with advances in laser, modeling and printing technology, GE has actually 3D printed a complete product. Xinhua Wu, a lead researcher at Australia's Monash University, recently explained the allure of 3D-printed jet engines. Because of their complexity, she noted, manufacturing jet engine parts requires on the order of 6 to 24 months.

But 3D printing reduces manufacturing time to something more like one to two weeks. 'Simple or complex, 3D printing doesn't care,' she said.

"It produces [parts] in the same time."

3D Printing Cars
Last year, Jay Rogers from Local Motors built a 3D-printed car.

Local Motors' 3D-printed car.

It's made of ABS plastic reinforced with carbon fiber. As they describe, "Everything on the car that could be integrated into a single material piece has been printed. This includes the chassis/frame, exterior body, and some interior features. The mechanical components of the vehicle, like battery, motors, wiring, and suspension, are sourced from Renault's Twizy, an electric-powered city car." It is called "The Strati," costs $15,000, and gets 80 kilometers of range on a single charge.

Today, the car takes 44 hours to print, but soon the team at Local Motors plans to cut the print process to less than 24 hours. In the past, producing a new car with a new design was very expensive and time-consuming — especially when it comes to designing the tooling to handle production of the newly designed car. With additive manufacturing, once you've designed the vehicle on a computer, you literally press *print*.

3D Printing Houses
WinSun's 3D-printed house.

In China, a company called WinSun Decoration Design Engineering 3D printed 10 full-sized houses in a single day last year. They used a quick-drying concrete mixture composed mostly of recycled construction and waste material and pulled it off at a cost of less than $5,000 per house. Instead of using, say, bricks and mortar, the system extrudes a mix of high-grade cement and glass fiber material and prints it, layer by layer. The printers are 105 feet by 33 feet each and can print almost any digital design that the clients request.

The process is environmentally friendly, fast and nearly labor-free.

Manufacturing Is a $10 Trillion Business Ripe for Disruption
We will continue to see advances in additive manufacturing dramatically change how we produce the core infrastructure and machines that make modern life possible.

Does Artificial Intelligence Pose a Threat?
By Ted Greenwald

A panel of experts discusses the prospect of machines capable of autonomous reasoning. Paging Sarah Connor!

After decades as a sci-fi staple, artificial intelligence has leapt into the mainstream. Between Apple's Siri and Amazon's Alexa, IBM's Watson and Google's Brain, machines that understand the world and respond productively suddenly seem imminent. The combination of immense Internet-connected networks and machine-learning algorithms has yielded dramatic advances in machines' ability to understand spoken and visual communications, capabilities that fall under the heading "narrow" artificial intelligence. Can machines capable of autonomous reasoning—so-called general AI—be far behind? And at that point, what's to keep them from improving themselves until they have no need for humanity? The prospect has unleashed a wave of anxiety. "I think the development of full artificial intelligence could spell the end of the human race," astrophysicist Stephen Hawking told the BBC.

Tesla founder Elon Musk called AI "our biggest existential threat." Former Microsoft Chief Executive Bill Gates has voiced his agreement. How realistic are such concerns? And how urgent? We assembled a panel of experts from industry, research and policy-making to consider the dangers—if any—that lie ahead. Taking part in the discussion are Jaan Tallinn, a co-founder of Skype and the think tanks Centre for the Study of Existential Risk and the Future of Life Institute; Guruduth S. Banavar, vice president of cognitive computing at IBM's Thomas J. Watson Research Center; and Francesca Rossi, a professor of computer science at the University of Padua, a fellow at the Radcliffe Institute for Advanced Study at Harvard University and president of the International Joint Conferences on Artificial Intelligence, the main international gathering of researchers in AI. Here are edited excerpts from their conversation.

What's the risk?
WSJ: Does AI pose a threat to humanity?

BANAVAR: Fueled by science-fiction novels and movies, popular treatment of this topic far too often has created a false sense of conflict between humans and machines. "Intelligent machines" tend to be great at tasks that humans are not so good at, such as sifting through vast data.

Conversely, machines are pretty bad at things that humans are excellent at, such as common-sense reasoning, asking brilliant questions and thinking out of the box. The combination of human and machine, which we consider the foundation of cognitive computing, is truly revolutionizing how we solve complex problems in every field. AI-based systems are already making our lives better in so many ways: Consider automated stock-trading agents, aircraft autopilots, recommendation systems, industrial robots, fraud detectors and search engines. In the last five to 10 years, machine-learning algorithms and advanced computational infrastructure have enabled us to build many new applications. However, it’s important to realize that those algorithms can only go so far.

More complex symbolic systems are needed to achieve major progress—and that’s a tall order. Today’s neuroscience and cognitive science barely scratch the surface of human intelligence. My personal view is that the sensationalism and speculation around general-purpose, human-level machine intelligence is little more than good entertainment. TALLINN: Today’s AI is unlikely to pose a threat. Once we shift to discussing long-term effects of general AI (which, for practical purposes, we might define as AI that’s able to do strategy, science and AI development better than humans), we run into the superintelligence control problem. WSJ: What is the superintelligence control problem?

TALLINN: Even fully autonomous robots these days have off switches that allow humans to have ultimate control. However, the off switch only works because it is outside the domain of the robot. For instance, a chess computer is specific to the domain of chess rules, so it is unaware that its opponent can pull the plug to abort the game. However, if we consider superintelligent machines that can represent the state of the world in general and make predictions about the consequences of someone hitting their off switch, it might become very hard for humans to use that switch if the machine is programmed (either explicitly or implicitly) to prevent that from happening. WSJ: How serious could this problem be? TALLINN: It’s a purely theoretical problem at this stage. But it would be prudent to assume that a superintelligent AI would be constrained only by the laws of physics and the initial programming given to its early ancestor.

The initial programming is likely to be a function of our knowledge of physics—and we know that’s still incomplete! Should we find ourselves in a position where we need to specify to an AI, in program code, “Go on from here and build a great future for us,” we’d better be very certain we know how reality works. As to your question, it could be a serious problem. It is important to retain some control over the positions of atoms in our universe [and not inadvertently give control over them to an AI]. ROSSI: AI is already more “intelligent” than humans in narrow domains, some of which involve delicate decision making. Humanity is not threatened by them, but many people could be affected by their decisions.

Examples are autonomous online trading agents, health-diagnosis support systems and soon autonomous cars and weapons. We need to assess their potential dangers in the narrow domains where they will function and make them safe, friendly and aligned with human values.

This is not an easy task, since even humans do not rationally follow their own principles most of the time.

Affecting everyday life
WSJ: What potential dangers do you have in mind for narrow-domain AI?

ROSSI: Consider automated trading systems. A bad decision in these systems may be (and has been) a financial disaster for many people. That will also be the case for self-driving cars. Some of their decisions will be critical and possibly affect lives.

BANAVAR: Any discussion of risk has two sides: the risk of doing it and the risk of not doing it. We already know the practical risk today of decisions made with incomplete information by imperfect professionals—thousands of lives, billions of dollars and slow progress in critical fields like health care. Based on IBM’s experience with implementing Watson in multiple industries, I maintain that narrow-domain AI significantly mitigates these risks.

I will not venture into the domain of general AI, since it is anybody’s speculation. My personal opinion is that we repeatedly underestimate the complexity of implementing it. There simply are too many unknown unknowns. WSJ: What proactive steps is International Business Machines taking to mitigate risks arising from its AI technology? BANAVAR: Cognitive systems, like other modern computing systems, are built using cloud-computing infrastructure, algorithmic code and huge amounts of data. The behavior of these systems can be logged, tracked and audited for violations of policy.

These cognitive systems are not autonomous, so their code, data and infrastructure themselves need to be protected against attacks. People who access and update any of these components can be controlled. The data can be protected through strong encryption and its integrity managed through digital signatures. The algorithmic code can be protected using vulnerability scanning and other verification techniques. The infrastructure can be protected through isolation, intrusion protection and so on. These mechanisms are meant to support AI safety policies that emerge from a deeper analysis of the perceived risks. Such policies need to be identified by bodies like the SEC, FDA and more broadly NIST, which generally implement standards for safety and security in their respective domains.

WSJ: Watson is helping doctors with diagnoses. Can it be held responsible for a mistake that results in harm? BANAVAR: Watson doesn’t provide diagnoses. It digests huge amounts of medical data to provide insights and options to doctors in the context of specific cases. A doctor could consider those insights, as well as other factors, when evaluating treatment options. And the doctor can dig into the evidence supporting each of the options.

But, ultimately, the doctor makes the final diagnostic decision. ROSSI: Doctors make mistakes all the time, not because they are bad, but because they can’t possibly know everything there is to know about a disease. Systems like Watson will help them make fewer mistakes. TALLINN: I’ve heard about research into how doctors compare to automated statistical systems when it comes to diagnosis. The conclusion was that the doctors, at least on average, were worse. What’s more, when doctors second-guessed the system, they made the result worse. BANAVAR: On the whole, I believe it is beneficial to have more complete information from Watson.

I, for one, would personally prefer that anytime as a patient!

The human impact

WSJ: Some experts believe that AI is already taking jobs away from people. Do you agree?

TALLINN: Technology has always had the tendency to make jobs obsolete.

I’m reminded of an Uber driver whose services I used a while ago. His seat was surrounded by numerous gadgets, and he demonstrated enthusiastically how he could dictate my destination address to a tablet and receive driving instructions. I pointed out to him that, in a few years, maybe the gadgets themselves would do the driving. To which he gleefully replied that then he could sit back and relax—leaving me to quietly shake my head in the back seat.

I do believe the main effect of self-driving cars will come not from their convenience but from the massive impact they will have on the job market. In the long run, we should think about how to organize society around something other than near-universal employment.

BANAVAR: From time immemorial, we have built tools to help us do things we can't do. Each generation of tools has made us rethink the nature and types of jobs. Productivity goes up, professions are redefined, new professions are created and some professions become obsolete. Cognitive systems, which can enhance and scale the capabilities of our minds, have the potential to be even more transformative.

The key question will be how to build institutions to quickly train professionals to exploit cognitive systems as their assistants. Once learned, these skills will make every individual a better professional, and this will set a new bar for the nature of expertise.

WSJ: How should the AI community prepare?

TALLINN: There is significant uncertainty about the time horizons and whether a general AI is possible at all. (Though, being a physicist, I don't see anything in physics that would prevent it!) Crucially, though, the uncertainty does not excuse us from thinking about the control problem.

Proper research into this is just getting started and might take decades, because the problem appears very hard.

ROSSI: I believe we can design narrowly intelligent AI machines in a way that eliminates most undesired effects.

We need to align their values with ours and equip them with guiding principles and priorities, as well as conflict-resolution abilities that match ours. If we do that in narrowly intelligent machines, they will be the building blocks of general AI systems that will be safe enough not to threaten humanity.

BANAVAR: In the early 1990s, when it became apparent the health-care industry would be computerized, patient-rights activists in multiple countries began a process that resulted in confidentiality regulations a decade later. As in other places, it is now technologically feasible to track HIPAA compliance, and it is possible to enforce the liability regulations for violations. Similarly, the serious question to ask in the context of narrow-domain AI is, what are the rights that could be violated, and what are the resulting liabilities?

ROSSI: Just as we have safety checks that anybody who wants to sell a human-driven car must pass, there will need to be new checks for self-driving cars.

Not only will the code running in such cars need to be carefully verified and validated, but we will also need to check that the decisions will be made according to ethical and moral principles that we would agree on.

BANAVAR: What are the rights of drivers, passengers, and passersby in a world with self-driving cars? Is it a consumer's right to limit the amount of information that can be exchanged between a financial adviser and her cognitive assistant? Who is liable for the advice—the financial adviser, the financial-services organization, the builder of the cognitive assistant or the curator of the data? These are as much questions about today's world, [about how we regulate] autonomous individuals and groups with independent goals, as they are about a future world with machine intelligence.

Greenwald is a news editor for The Wall Street Journal in San Francisco.

An often overlooked but very important process in the development of any Internet-facing service is testing it for vulnerabilities, knowing whether those vulnerabilities are actually exploitable in your particular environment and, lastly, knowing what the risks of those vulnerabilities are to your firm or product launch. These three processes are known as a vulnerability assessment, a penetration test and a risk analysis.

Knowing the difference is critical when hiring an outside firm to test the security of your infrastructure or a particular component of your network. Let's examine the differences in depth and see how they complement each other.

Vulnerability assessment

Vulnerability assessments are often confused with penetration tests, and the two terms are often used interchangeably, but they are worlds apart.

Vulnerability assessments are performed by using an off-the-shelf software package, such as Nessus or OpenVAS, to scan an IP address or range of IP addresses for known vulnerabilities. For example, the software has signatures for the Heartbleed bug or missing Apache web server patches and will alert if they are found. The software then produces a report that lists the vulnerabilities found and (depending on the software and options selected) gives an indication of the severity of each vulnerability and basic remediation steps.

It's important to keep in mind that these scanners use a list of known vulnerabilities, meaning flaws already known to the security community, hackers and the software vendors. There are vulnerabilities that are unknown to the public at large, and these scanners will not find them.
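To make that concrete, here is a minimal sketch of how a team might summarize a scanner's findings by severity. The CSV layout and the "Risk" column name are assumptions for illustration; match them to whatever your scanner (Nessus, OpenVAS, etc.) actually exports.

```python
# Minimal sketch: tally a vulnerability scanner's CSV export by severity.
# The "Risk" column name is an assumption; adjust to your scanner's format.
import csv
from collections import Counter

def summarize(report_path):
    counts = Counter()
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row.get("Risk", "Unknown")] += 1
    for severity, n in counts.most_common():
        print(f"{severity}: {n} finding(s)")

# Usage (against a real export): summarize("scan_results.csv")
```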

Penetration test

Many "professional penetration testers" will actually just run a vulnerability scan, package up the report with a nice, pretty bow and call it a day. Nope – this is only a first step in a penetration test. A good penetration tester takes the output of a network scan or a vulnerability assessment and takes it to 11 – they probe an open port and see what can be exploited. For example, let's say a website is vulnerable to Heartbleed. Many websites still are.

It’s one thing to run a scan and say “you are vulnerable to Heartbleed” and a completely different thing to exploit the bug and discover the depth of the problem and find out exactly what type of information could be revealed if it was exploited. This is the main difference – the website or service is actually being penetrated, just like a hacker would do.
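As a minimal illustration of that extra step, the sketch below goes beyond a scan result and probes an open port directly to see what service is actually answering. The host and port are placeholders, and real testing of course requires written authorization.

```python
# Minimal sketch: probe an open port and grab the service banner -- the first
# step beyond "the scanner says you're vulnerable." Host/port are placeholders.
import socket

def grab_banner(host, port, timeout=3.0):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(1024).decode(errors="replace").strip()
        except socket.timeout:
            return ""

print(grab_banner("192.0.2.10", 22))  # e.g. an SSH version string
```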

Similar to a vulnerability scan, the results are usually ranked by severity and exploitability, with remediation steps provided. Penetration tests can be performed using automated tools, such as Metasploit, but veteran testers will write their own exploits from scratch.

Risk analysis

A risk analysis is often confused with the previous two terms, but it is a very different animal. A risk analysis doesn't require any scanning tools or applications – it's a discipline that analyzes a specific vulnerability (such as a line item from a penetration test) and attempts to ascertain the risk – financial, reputational, business continuity, regulatory and others – to the company if the vulnerability were to be exploited.

Many factors are considered when performing a risk analysis: asset, vulnerability, threat and impact to the company. An example of this would be an analyst trying to find the risk to the company of a server that is vulnerable to Heartbleed.

The analyst would first look at the vulnerable server, where it is on the network infrastructure and the type of data it stores. A server sitting on an internal network without outside connectivity, storing no data but vulnerable to Heartbleed has a much different risk posture than a customer-facing web server that stores credit card data and is also vulnerable to Heartbleed. A vulnerability scan does not make these distinctions. Next, the analyst examines threats that are likely to exploit the vulnerability, such as organized crime or insiders, and builds a profile of capabilities, motivations and objectives. Last, the impact to the company is ascertained – specifically, what bad thing would happen to the firm if an organized crime ring exploited Heartbleed and acquired cardholder data?
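As a toy illustration of how those factors combine, the sketch below uses the common qualitative formulation risk = likelihood × impact. The 1–5 scales and the two example servers are invented for illustration; a real analysis weighs many more factors.

```python
# Toy sketch of a qualitative risk calculation: risk = likelihood x impact.
# The scales and example values are invented for illustration only.
def risk_score(likelihood, impact):  # each on a 1-5 scale
    return likelihood * impact

# Same Heartbleed bug, very different risk postures:
internal_server = risk_score(likelihood=2, impact=1)    # isolated, stores no data
cardholder_server = risk_score(likelihood=5, impact=5)  # internet-facing, card data

print(internal_server, cardholder_server)  # 2 vs 25
```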

A risk analysis, when completed, will have a final risk rating with mitigating controls that can further reduce the risk. Business managers can then take the risk statement and mitigating controls and decide whether or not to implement them. The three concepts explained here are not mutually exclusive; rather, they complement each other. In many information security programs, vulnerability assessments are the first step – they are used to perform wide sweeps of a network to find missing patches or misconfigured software.

From there, one can either perform a penetration test to see how exploitable the vulnerability is or a risk analysis to ascertain the cost/benefit of fixing it. Of course, you don't need either to perform a risk analysis. Risk can be determined anywhere a threat and an asset are present, whether that's a data center in a hurricane zone or confidential papers sitting in a wastebasket.

It's important to know the difference – each is significant in its own way, and they have vastly different purposes and outcomes. Make sure any company you hire to perform these services also knows the difference. This article is published as part of the IDG Contributor Network.

The Ultimate Interface: Your Brain

By Ramez Naam, the author of five books, including an award-winning series of sci-fi novels. A shorter version of this article first appeared elsewhere.

The final frontier of digital technology is integrating into your own brain. DARPA wants to go there. Scientists want to go there. Entrepreneurs want to go there. And increasingly, it looks like it’s possible. You’ve probably read bits and pieces about brain implants and prostheses.

Let me give you the big picture. Neural implants could accomplish things no external interface could: virtual and augmented reality with all five senses (or more); augmentation of human memory, attention, and learning speed; even multi-sense telepathy — sharing what we see, hear, touch, and even perhaps what we think and feel with others. Arkady flicked the virtual layer back on. Lightning sparkled around the dancers on stage again, electricity flashed from the DJ booth, silver waves crashed onto the beach. A wind that wasn't real blew against his neck.

And up there, he could see the dragon flapping its wings, turning, coming around for another pass. He could feel the air move, just like he'd felt the heat of the dragon's breath before. – Adapted from book 2 of the author's sci-fi series.

It is and it's not. Start with motion. In clinical trials today there are brain implants that have given men and women control of robot hands and fingers. DARPA has now put the same technology to broader use. And in animals, the technology has been used in the opposite direction as well.

Or consider vision. For more than a year now, we've had implants that restore vision via a chip implanted on the retina.

More radical technologies are in the works. (They'd do even better with implants in the brain.) Sound, we've been dealing with for decades, sending it into the nervous system through cochlear implants. Recently, children born deaf and without an auditory nerve have had hearing restored through implants placed directly in the brainstem.

Nor are our senses or motion the limit. In rats, we've repaired damaged memory with a brain prosthesis. Human trials are starting this year. Now, you say your memory is just fine? Well, in rats, such implants have also enhanced normal memory. And researchers can capture the neural trace of an experience, record it, and play it back any time they want later on.

Sounds useful. In monkeys, we've done better, using a brain implant to boost performance in pattern-matching tests. Now, let me be clear. All of these systems, for lack of a better word, suck. They're crude. They're clunky.

They’re low resolution. That is, most fundamentally, because they have such low-bandwidth connections to the human brain. Your brain has roughly 100 billion neurons and 100 trillion neural connections, or synapses. An iPhone 6’s A8 chip has 2 billion transistors. (Though, let’s be clear, a transistor is not anywhere near the complexity of a single synapse in the brain.) The highest bandwidth neural interface ever placed into a human brain, on the other hand, had just 256 electrodes.
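A quick back-of-envelope calculation, using only the figures just quoted, shows how stark the mismatch is:

```python
# Back-of-envelope ratios from the figures quoted above.
neurons = 100e9     # ~100 billion neurons
synapses = 100e12   # ~100 trillion synapses
electrodes = 256    # highest-bandwidth human implant to date

print(f"{neurons / electrodes:,.0f} neurons per electrode")    # ~390 million
print(f"{synapses / electrodes:,.0f} synapses per electrode")  # ~390 billion
```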

Most don't even have that. The second barrier to brain interfaces is that getting even 256 channels in generally requires invasive brain surgery, with its costs, healing time, and the very real risk that something will go wrong. That's a major impediment, making neural interfaces viable only for people who have a huge amount to gain, such as those who've been paralyzed or suffered brain damage.

This is not yet the iPhone era of brain implants. We’re in the DOS era, if not even further back. What if, at some point, technology gives us high-bandwidth neural interfaces that can be easily implanted? Imagine the scope of software that could interface directly with your senses and all the functions of your mind: They gave Rangan a pointer to their catalog of thousands of brain-loaded Nexus apps.

Network games, augmented reality systems, photo and video and audio tools that tweaked data acquired from your eyes and ears, face recognizers, memory supplementers that gave you little bits of extra info when you looked at something or someone, sex apps (a huge library of those alone), virtual drugs that simulated just about everything he’d ever tried, sober-up apps, focus apps, multi-tasking apps, sleep apps, stim apps, even digital currencies that people had adapted to run exclusively inside the brain. - An excerpt from, book 3 of the. The implications of mature neurotechnology are sweeping. Neural interfaces could help tremendously with mental health and neurological disease. Pharmaceuticals enter the brain and then spread out randomly, hitting whatever receptor they work on all across your brain. Neural interfaces, by contrast, can stimulate just one area at a time, can be tuned in real-time, and can carry information out about what’s happening. We’ve already seen that deep brain stimulators can do amazing things for. The same technology is on trial for untreatable,, and.

And we know that stimulating the right centers in the brain can induce sleep or alertness, hunger or satiation, ease or stimulation, as quick as the flip of a switch. Or, if you’re running code, on a schedule.

(Siri: Put me to sleep until 7:30, high priority interruptions only. And let’s get hungry for lunch around noon. Turn down the sugar cravings, though.) Implants that help repair brain damage are also a gateway to devices that improve brain function.

Think about the “hippocampus chip” that repairs the ability of rats to learn. Building such a chip for humans is going to teach us an incredible amount about how human memory functions. And in doing so, we’re likely to gain the ability to improve human memory, to speed the rate at which people can learn things, even to save memories offline and relive them — just as we have for the rat.

That has huge societal implications. Boosting how fast people can learn would accelerate innovation and economic growth around the world. It’d also give humans a new tool to keep up with the job-destroying features of ever-smarter algorithms. The impact goes deeper than the personal, though. Computing technology started out as number crunching. These days the biggest impact it has on society is through communication.

If neural interfaces mature, we may well see the same. What if you could directly beam an image in your thoughts onto a computer screen? What if you could directly beam that to another human being? Or, across the internet, to any of the billions of human beings who might choose to tune into your mind-stream online? What if you could transmit not just images, sounds, and the like, but emotions? Intellectual concepts? All of that is likely to eventually be possible, given a high enough bandwidth connection to the brain.

That type of communication would have a huge impact on the pace of innovation, as scientists and engineers could work more fluidly together. And it's just as likely to have a transformative effect on the public sphere, in the same way that email, blogs, and Twitter have successively changed public discourse. Digitizing our thoughts may have some negative consequences, of course.

With our brains online, every concern about privacy, about hacking, about surveillance from the NSA or others, would all be magnified. If thoughts are truly digital, could the right hacker spy on your thoughts? Could law enforcement get a warrant to read your thoughts? Heck, in the current environment, would law enforcement (or the NSA) even need a warrant?

Could the right malicious actor even change your thoughts? "Focus," Ilya snapped. "Can you erase her memories of tonight? Fuzz them out?" "Nothing subtle," he replied. "Probably nothing very effective. And it might do some other damage along the way." – An excerpt from book 1 of the author's sci-fi series. The ultimate interface would bring the ultimate new set of vulnerabilities.

(Even if those scary scenarios don't come true, could you imagine what spammers and advertisers would do with an interface to your neurons, if it were the least bit non-secure?) Everything good and bad about technology would be magnified by implanting it deep in brains. In my fiction I crash the good and bad views against each other, in a violent argument about whether such a technology should be legal. Is the risk of brain-hacking outweighed by the societal benefits of faster, deeper communication, and the ability to augment our own intelligence? For now, we're a long way from facing such a choice. In fiction, I can turn the neural implant into a silvery vial of nano-particles that you swallow, and which then self-assemble into circuits in your brain.

In the real world, clunky electrodes implanted by brain surgery dominate, for now. That's changing, though. Researchers across the world, many funded by DARPA, are working to radically improve the interface hardware, boosting the number of neurons it can connect to (and thus making it smoother, higher resolution, and more precise), and making it far easier to implant. They've recently shown that carbon nanotubes, a thousand times thinner than current electrodes, can record from neurons. They're also working on silk-substrate interfaces that conform to the brain.

Researchers at Berkeley have a proposal for tiny "neural dust" sensors that would be sprinkled across your brain (which sounds rather close to the technology I describe in my fiction). And the former editor of the journal Neuron has pointed out that carbon nanotubes are so slender that a bundle of a million of them could be inserted into the blood stream and steered into the brain, giving us a nearly 10,000-fold increase in neural bandwidth, without any brain surgery at all. Even so, we're a long way from having such a device. We don't actually know how long it'll take to make the breakthroughs in the hardware to boost precision and remove the need for highly invasive surgery.

Maybe it'll take decades. Maybe it'll take more than a century, and in that time, direct neural implants will be something that only those with a handicap or brain damage find worth the risk-to-reward tradeoff. Or maybe the breakthroughs will come in the next ten or twenty years, and the world will change faster.

DARPA is certainly pushing the hardware forward. Will we be ready? I, for one, am enthusiastic. There'll be problems. Lots of them.

There'll be policy and privacy and security and civil rights challenges. But just as we see today's digital technology of Twitter and Facebook and camera-equipped mobile phones boosting freedom around the world, and boosting the ability of people to connect to one another, I think we'll see much more positive than negative if we ever get to direct neural interfaces. In the meantime, I'll keep writing about them. Just to get us ready.

Whirlpool CIO Mike Heim is using cutting-edge tech to reinvent the lowly laundromat, but first he had to reinvent how his IT team worked. Welcome to the future of IoT.

The corner laundromat isn't the typical place for high-tech innovation. But don't tell that to Whirlpool CIO Mike Heim. When he looks at a laundromat, he sees Clothespin, a trademarked technology that connects commercial laundry machines through wireless cloud communications to smartphones and laundry equipment service providers. The Clothespin technology, developed by Heim and a cross-functional team of IT and business experts, also includes a mobile payment app (Qkr!) and a merchant account system (Simplify Commerce) from MasterCard. Among other functions, Clothespin allows people to use their smartphones to remotely check for available washers and dryers, pay with MasterCard or Visa rather than coins, add cycles remotely and receive notification when laundry cycles are done. On the operator side, Clothespin enables equipment service providers to remotely change prices based on demand, time of day and other market factors; track machine utilization; identify machines requiring maintenance; and provide users with promotions. Developed in a five-day sprint last June, the project had Heim's tech people moving among e-payment processing, IT security and mobile app development and working with a variety of business functions and vendors.
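To picture the kind of plumbing involved, here is a hypothetical sketch of the client call a Clothespin-style app might make to check machine availability. The endpoint, JSON fields and auth scheme are all invented for illustration; nothing here describes Whirlpool's actual API.

```python
# Hypothetical sketch of a machine-availability check for a connected
# laundromat. The URL, JSON fields and bearer-token auth are invented.
import requests

def available_machines(laundromat_id: str, token: str) -> list:
    resp = requests.get(
        f"https://api.example.com/laundromats/{laundromat_id}/machines",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Keep only machines reporting an idle status.
    return [m for m in resp.json() if m.get("status") == "idle"]

# Usage (against a real service): available_machines("store-42", token="...")
```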

'It required us as an IT function to partner differently,' he said. Welcome to the Internet of Things in the enterprise.

Future of IoT is about virtualizing the physical world

Heim's experience in delivering Clothespin illustrates what CIOs everywhere, in nearly every industry, will take on as organizations seek to capitalize on the Internet of Things, industry experts say. The Internet of Things, or IoT, is the next step in connectivity evolution.

It brings people, machines, data and organizations together in a large ecosystem -- as the Whirlpool example shows. Indeed, the future of IoT goes well beyond machine-to-machine connectivity, one of the early steps in this continuum, where two or more devices were connected and/or tied to a back-end system via a purpose-built application but not integrated into an ecosystem beyond that. 'The Internet of Things is essentially this move to virtualizing the physical world; for businesses, it has an immense amount of potential for impact and disruption,' said Brian Partridge, vice president of research and consulting at 451 Research. CIOs and their IT teams play a central role. At the very least, they will provide and support the burgeoning equipment (from sensors to analytics systems) that makes up IoT. However, successful CIOs must provide more than that infrastructure to help their organizations capitalize on this new environment, Partridge said. Just as IoT is reshaping connections everywhere, IoT is reshaping enterprise IT, requiring technologists to be more visionary, more collaborative and more business-minded than ever before.

'CIOs have been focused on providing tools. [The future of IoT] is potentially about reinventing the business, so CIOs need to work at a different level,' Partridge said. Seth Robinson, vice president of research and market intelligence for the nonprofit IT trade association CompTIA, agreed that imagination trumps technology smarts in the IoT era. 'Being innovative and trying to figure out the use cases is one of the big challenges of IoT,' he said.

IoT's smorgasbord of tools calls for IT partnerships

IoT has certainly gotten the attention of most IT organizations. Partridge pointed to research from his firm showing that, among the North American and European IT decision makers surveyed, 71% said they are planning for IoT. But only 8% said they're using IoT technology. What's ahead, though, is staggering. According to a 2013 Wikibon report, global investment in the industrial Internet of Things will grow 2,400% from the $20 billion spent in 2012. And an Accenture report forecasts that the industrial IoT will lift real GDP by $10.6 trillion by 2030 in 20 of the world's top economies.

For many organizations across various industries, the future of IoT is now. Insurance companies that use sensor data to make underwriting decisions are using the Internet of Things. So are medical device companies that design their products to feed data back to healthcare providers. Industrial companies that use wireless technologies, sensors and data streams to monitor their assembly lines or field-based equipment also are capitalizing on IoT.

Moreover, even organizations that aren't engaged in IoT-related projects yet are likely to have pieces of the enabling infrastructure in place. That's because IoT isn't a single product or platform but a combination of many technologies, from sensors that collect data at various points in a process, to the analytics systems turning data into actionable information, to the network that carries all that data. 'From my perspective, every CIO needs to be thinking about this, at least at the planning and experimentation stage,' Partridge said. 'CIOs need to be thinking about this technology and they need to think about it in the broadest context: What are the business results we can drive with IoT?'

This is not, under any circumstances, a one-man job, Partridge and other thought leaders said. CIOs need to work with business leaders to formulate an IoT vision and strategy, and they also need to work across the organization's functional areas to understand where there's potential and how that potential can be turned into results. 'That's well beyond the IT realm; it's not just about skills and hardware,' Partridge underscored. That said, CIOs must still ensure they have the technology infrastructure in place to handle an organization's evolution into the Internet of Things, analysts said. CompTIA's Robinson said he sees four key areas CIOs and their tech teams need to address moving forward:
• Hardware: all the sensors and devices;
• Software: to connect the hardware and perform analytics;
• Yet-to-be-determined rules and regulations: 'Just like the Internet needs certain protocols, IoT will need them, too,' Robinson said; and
• Services: Vendors will emerge to provide help to companies that either can't or won't take on IoT capabilities on their own, Robinson said.
But to capitalize on IoT, CIOs need to think about transformation, he added. They need to look out five, 10, even 20 years down the road and figure out what the business will need; what new markets it can enter; and what products, offers or values the organization can make that it couldn't before.

'That creation and innovation is going to be really key,' Robinson said. 'Cloud and mobility really ushered in a new era of IT and ushered in other new areas, including the IoT. All these things have promised to change the world, and there's a little bit of truth behind that. But it will take some time to realize all that potential.'

CIO role in IoT: Prescription

Is the CIO's role in capitalizing on the IoT wholly different from what CIOs have been doing? An Accenture consultant says no.

CIOs have long experimented with emerging technologies, formulating ways to bring them into the organization. But IoT magnifies the need for these responsibilities.

Converged and hyper-converged infrastructure make it easier to support VDI and desktop virtualization because they're built to install simply and run complex workloads. Converged infrastructure (CI) brings the four core aspects of a data center -- compute, storage, networking and server virtualization -- into a single chassis. Hyper-converged infrastructure (HCI) adds tighter integration between more components through software. With both CI and HCI, you know for sure that all the components are compatible with one another, and you can supply your shop with the necessary storage and networking that are so important to VDI in one fell swoop. This helps reduce the complexity of deploying VDI, an advancement that many shops looking to virtualize desktops would welcome.

As helpful and innovative as the technologies are, however, they also bring up some questions, such as what they do and how they differ. Well, it's time to put any confusion to rest. Let's sort through the features of converged vs. hyper-converged infrastructure and identify the differences to understand what makes each one important to desktop virtualization administrators.

What is converged infrastructure?

CI brings compute, storage, networking and server virtualization into a single chassis that you can manage centrally. This can include VDI management, depending on what configuration you buy and from which vendor.

The hardware you get in your CI bundle is pre-configured to run whatever workload you buy it for -- whether it's to support VDI, a database, a specific application or something else. But you don't have much flexibility to alter that configuration after the fact.

Regardless of how you build out a VDI environment, be aware that it is expensive and time consuming to scale up after the fact. Adding components separately becomes complex, taking away many of CI's benefits. And adding desktops and capacity to in-house infrastructure can be just as expensive, which speaks to the importance of proper planning for any VDI deployment. The pieces of your CI bundle can also stand on their own, however.

A server you purchase in a CI bundle functions just fine without the other infrastructure components you bought with it, for example.

What is hyper-converged infrastructure?

Born from converged infrastructure and the idea of the software-defined data center (SDDC), hyper-converged infrastructure takes more aspects of a traditional data center and puts them in one box. It includes the same four aspects that come with converged infrastructure, but sometimes adds more components, including backup software, snapshot capabilities, data deduplication, inline compression, WAN optimization and more. CI is mainly hardware-focused, and the SDDC is usually hardware-agnostic; HCI combines these two aspects. HCI is also supported by one vendor and allows you to manage everything as a single system through a common toolset. To expand your infrastructure, you simply snap boxes of the resources you need, such as storage, to the base unit as you go.

Because HCI is software-defined -- which means the infrastructure operations are logically separated from the physical hardware -- the integration between components is much tighter than you see with CI, and the components have to stay together to function correctly. That makes HCI more flexible, because you can change how the infrastructure is defined and configured at the software level and manipulate it to work for specialized applications or workloads that pre-configured CI bundles can't support. Hyper-converged infrastructure also suits VDI because it lets you scale up quickly without a ton of added expense. That's not the case in traditional VDI settings; shops either have to buy more resources than they need in anticipation of scaling up, or wait until virtual desktops eat up the allocated space and network, then add infrastructure after the fact. Both those situations can be expensive and time-consuming to resolve. But with HCI, if you need more storage, you can just snap that onto your stack. You can scale up in the time it takes for you to get another box, rather than going through an entire re-assessment and re-configuration of your in-house infrastructure.

Additionally, when you make the switch from physical PCs to virtual desktops, you still need something to do all the processing that laptops and desktops once did. Hyper-converged infrastructure helps with this because it often comes with a lot of flash, which is great for virtual desktop performance. It improves I/O, reduces the effects of boot storms, and lets you run virus and other scans in the background without users ever knowing.

The flexibility of hyper-converged infrastructure makes it more scalable and cost efficient than CI because you can add blocks of compute and storage as needed. HCI can cost more up front, but in the long term it can pay off.

Postcards from the perimeter: Network security in the cloud and mobile era

By David Strom. Security professionals break old boundaries with new network edge protection strategies. With distributed workforces and mobile technologies, the network perimeter has evolved beyond the physical limits of most corporate campuses. The days when the perimeter was an actual boundary are a fond memory. Back then, firewalls did a decent job of protecting the network from outside threats, and intrusion prevention tools protected against insiders. But over time, the bad guys have gotten better: Spear phishing has made it easier to infiltrate malware, and poor password controls have made it easier to exfiltrate data. This means that the insiders are getting harder to detect, and IT assets are getting more distributed and harder to defend.

Complicating matters, today’s data centers are no longer on-premises. As cloud and mobile technologies become the norm, the notion of a network edge no longer makes much sense. New network security models are required to define what the network perimeter is and how it can be defended. CIOs and enterprise security managers are using different strategies to defend these “new” perimeters, as corporate data and applications travel on extended networks that are often fragmented. The borders between trusted internal infrastructure and external networks still exist, but the protection strategies and security policies around network applications, access control, identity and access management, and data security require new security models.

Here we look at four network edge-protection strategies in use today: protecting the applications layer, using encryption certificates, integrating single sign-on technologies and building Web front-ends to legacy apps.

Provide application-layer protection

While firewalls have been around for some time, what's new is how important their application awareness has become in defending the network edge. By focusing on the applications layer, enterprises can better keep track of potential security abuses because IT and security teams can quickly see who is using sensitive or restricted apps. One way to do this is to develop your own custom network access software that works with firewalls and intrusion detection systems. This is what Tony Maro did as the CIO for medical records management firm EvriChart Inc., in White Sulphur Springs, W.Va. “We have some custom firewall rules that only allow access to particular networks, based on the originating device.

So, an unregistered PC will get an IP address on a guest network with only outside Internet access and nothing else. Or, conversely, a PC with personal health information will get internal access but no Internet connection,” Maro says. “This allows for a lot more fine-grained control than simple vLANs.

We also monitor our DHCP leases and notify our help desk whenever a new device shows up on that list.” Another method is to incorporate real-time network traffic analysis. A number of vendors, including McAfee, Norse Corp., FireEye Inc., Cisco, Palo Alto Networks Inc. and Network Box Corp., use this analysis as part of their firewalls and other protective devices.

Make proper use of encryption and digital certificates

A second strategy is to deploy encryption and digital certificates widely as a means to hide traffic, strengthen access controls and prevent man-in-the-middle attacks.

Some enterprises have come up with rather clever and inexpensive homegrown solutions, while others are making use of sophisticated network access control products, such as MobileIAM from Extreme Networks Inc., that combine certificates with RADIUS directory servers to identify network endpoints. Extreme Networks' dashboard has an interesting "fingerprint" display showing the information it collects from each endpoint. "We use certificates for all of our access control because simple passwords are useless," says Bob Matsuoka, the CTO of New York-based CityMaps.com. The company found it needed more protection than a user name and password combination for access to its Web servers, and providing certificates meant it could encrypt the traffic across the Internet as well as strengthen its authentication dialogs.
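As a minimal sketch of what certificate-based access control looks like in practice, the snippet below stands up a TLS server that refuses any client unable to present a certificate signed by a trusted CA. The file names and port are placeholders; a production deployment would add revocation checks and logging.

```python
# Minimal sketch: a TLS server that requires client certificates.
# Certificate/key file names and the port are placeholders.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")   # the server's own identity
ctx.load_verify_locations("company-ca.pem")       # CA that signs client certs
ctx.verify_mode = ssl.CERT_REQUIRED               # no client cert, no connection

with socket.create_server(("0.0.0.0", 8443)) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls:
        conn, addr = tls.accept()  # handshake fails for unauthenticated clients
        print("authenticated client:", conn.getpeercert().get("subject"))
```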

While this approach increases the complexity of Web application security for his developers and other end users, it also has been very solid. "Over the past three years we haven't had any problems," Matsuoka says. One of the trade-offs is that his company is still operating in startup mode.

“You can have too much security when you are part of a startup, because you risk being late to market or impeding your code development.” Several vendors of classic two-factor tokens, such as Vasco Data Security Inc. and xAuthentify, are also entering this market by developing better certificate management tools that can secure individual transactions within an application. This could be useful for financial institutions that want to offer better protection without something intrusive to their customers.

Instead, these tools make use of native security inside the phone to sign particular encrypted data and create digital signatures of the transaction, all done transparently to the customer. To some extent, this is adding authentication to the actual application itself, which gets back to an application-layer protection strategy.

Use the cloud with single sign-on tools

As the number of passwords and various cloud-based applications proliferates, enterprises need better security than just re-using the same tired passphrases on all of their connections. One initiative that seems to be gaining traction is the use of a cloud-based single sign-on (SSO) tool to automate and protect user identities. Numerous enterprises are deploying these tools to create complex, and in some cases unknown, passwords for their users. SSO isn't something new: We have had these products for more than a decade.

What is new is that several products combine cloud-based software-as-a-service logins with local desktop Windows logins, and add improved two-factor authentication and smoother federated identity integration. Also helping is wider adoption of the open standard Security Assertion Markup Language (SAML), which allows for automated sign-ons via the exchange of XML information between websites. As a result, SSO is finding its way into a number of different arenas to help boost security, including BYOD, network access control and mobile device management tools. Post Foods LLC in St. Louis, Mo., is an adherent of SSO. The cereal maker uses Okta's security identity management and SSO service. Most of its corporate applications are connected through the Okta sign-in portal.

Users are automatically provisioned on the service (they don't even have to know their individual passwords), so they are logged in effortlessly, yet still securely. Brian Hofmeister, vice president of architecture and operations for parent company Post Holdings in St. Louis, says that SSO and federated identities let the consumer goods company offer the same collection of enterprise applications across its entire corporation of diverse offerings more quickly, while still keeping the network secure.
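For a feel of the XML exchange SAML is built on, here is a minimal sketch that pulls the authenticated user's NameID out of a SAML 2.0 assertion. It is illustrative only: a real service provider must also validate the assertion's XML signature, audience and timestamps before trusting anything in it.

```python
# Minimal sketch: extract the NameID from a SAML 2.0 assertion.
# Signature, audience and timestamp validation are deliberately omitted here,
# but are mandatory in any real deployment.
import xml.etree.ElementTree as ET

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def name_id(assertion_xml: str) -> str:
    root = ET.fromstring(assertion_xml)
    node = root.find(".//saml:Subject/saml:NameID", NS)
    return node.text if node is not None else ""

demo = """<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject><saml:NameID>alice@example.com</saml:NameID></saml:Subject>
</saml:Assertion>"""
print(name_id(demo))  # alice@example.com
```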

Consider making legacy applications Web-based

A few years ago the American Red Cross was one of the more conservative IT shops around.

Most of its applications ran on its own mainframes or were installed on specially provisioned PCs that were under the thumb of the central IT organization based in Washington, D.C. But then people started to bring their own devices along to staff the Red Cross’ disaster response teams. The IT department started out trying to manage users’ mobile devices -- and standardize on them. But within two or three months, the IT staff found the mobile vendors came out with newer versions, making their recommendations obsolete. Like many IT shops, the Red Cross found that the emergency response teams would rather use their own devices, and these devices would always be of more recent vintage, anyway. In the end, they realized that they had to change the way they delivered their applications to make them accessible from the Internet and migrate their applications to become more browser-based. The Red Cross still has its mainframe apps, just a different way to get to them.

And their end users are happier because they don't have to tote around ancient laptops and smartphones, too. By building a Web front-end to their mission-critical apps, the Red Cross was able to move security inside the application itself and not depend on the physical device that was running the application.

Web Wraps for Legacy Apps

Connections are made over SSL encryption so that data transferred from devices to the mainframes is protected. And the IT staff no longer has to worry about obsolete smartphones and can focus on building and "webifying" other applications. "You have to be able to adapt to the changing mobile environment," says John Crary, CIO for the American Red Cross. "It is moving rapidly. Businesses are going toward being more mobile-centric, and we need to be much quicker and much more adaptable." Certainly, breaking traditional boundaries with these four strategies isn't the only way you can set up a more secure network edge.

But by tying network security more closely to applications, certificates and transactions, you have a better chance at stopping the bad guys. About the author: David Strom is a freelance writer and professional speaker based in St. Louis. He is former editor in chief of TomsHardware.com.

Data science jobs not as plentiful as all the hype indicates

By Ed Burns. These days, it seems everyone is talking about data scientists.

General interest in the role continues to grow, but that isn't leading to corresponding growth in jobs among employers. Data scientist has famously been called "the sexiest job of the 21st century," and there can be no doubt that there's a lot of interest in data scientists and the work they do. That isn't surprising: There's something almost magical about being able to predict future business scenarios or find useful facts where others see only rows and rows of raw data.

But despite all the hype, data science jobs aren't seeing especially high demand among employers. The fact is, there's more call these days for data engineers, a less sexy position that typically involves identifying and implementing data analysis tools and working with database teams to ensure that data is prepared for analysis. You couldn't tell that from Google: The number of searches on the term data scientist has shot up since 2012 and is continuing on a sharp upward trajectory (see Figure 1). By comparison, data engineer gets less than one-third as many searches. But checking job listings on LinkedIn returns nearly three times as many results for "data engineer" as for "data scientist." That isn't a new trend.

Figures from job listing site Indeed.com show that since 2006, the percentage of listings on the site for data engineers has held relatively steady at around 1.5% of total job listings, with the exception of a brief upsurge followed by a corresponding downward correction around 2012 (see Figure 2). Job openings for data scientists are near their historic average today as well, but that's a much lower average, at barely above 0.15% of all listings. And the total number of data scientist positions listed is currently well below the most recent peak in 2013 (see Figure 3).

Skilled data scientists in short supply

The disparity may be partly due to the fact that there are so few true data scientists available to hire.

The mix of analytics, statistics, modeling, data visualization and communication skills that data scientists are expected to have makes them something of the proverbial unicorn. If businesses have realized that they aren't likely to find suitable candidates, it would make sense that they aren't bothering to post listings for data science jobs. It could also be that companies just don't see that much value in hiring data scientists. They generally command large salaries due to their mix of skills. Hiring a data engineer to fix the info plumbing and a team of business analysts trained in self-service software like Tableau or QlikView to ask questions and get answers might make more sense economically.

Businesses are, after all, pragmatic. Glassdoor.com, another job listing site, estimates the national average salary for data scientists to be $118,709. Data engineers make $95,936 on average, while data analysts take home $62,379. Combined, a data engineer and a data analyst may cost more, but they're typically tasked with more general responsibilities that can have more concrete business value, and they should be able to get a lot more done than a single data scientist.
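A quick check of the salary figures quoted above shows the trade-off in plain numbers:

```python
# Back-of-envelope comparison using the Glassdoor averages quoted above.
data_scientist = 118_709
data_engineer = 95_936
data_analyst = 62_379

combined = data_engineer + data_analyst
print(f"engineer + analyst: ${combined:,}")                           # $158,315
print(f"premium over one scientist: ${combined - data_scientist:,}")  # $39,606
```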

Data science jobs not so necessary?

There's also the question of need. A lot of businesses don't have big, complex data-related questions answerable only by Ph.D.-level data scientists. Many organizations' data problems are much smaller in scale and can be solved with tools that don't require advanced statistical analysis skills or programming knowledge. None of this is meant to diminish what data scientists can add to an organization. For businesses that have valid needs, they can be game changers. But there has been so much hype about the role that corporate executives could be forgiven for thinking that hiring a data scientist is equivalent to employing a business magician. In many cases, the reality simply doesn't match the level of exuberance.

Data-savvy young people who are considering which way to take their skills may want to take note. If you want a sexy job, become a data scientist. But if you want more job opportunities, and perhaps more job security, becoming a data engineer might be a better career choice.

Ed Burns is site editor of SearchBusinessAnalytics.

How to market to millennials (it's not easy)

From left: Brad Haugen, Kyla Brennan, Dee Anna McPherson and Kaitlyn Cawley, editor-in-chief of Elite Daily, at Collision in Las Vegas. A group of young marketers offer advice about wooing technically savvy, digitally driven millennials, who -- despite the fact that many are drowning in student loans -- are the target market du jour. LAS VEGAS -- In Las Vegas, if you're not new and fresh, you're old and boring.

Casinos thrive on youthful energy and young executives flush with cash. Nightclubs buzz into the wee hours, chock full of hard-partying millennials, not stodgy Gen-Xers in desperate need of their beauty sleep. Marketers, too, need to reach millennials, but are they doing enough?

Let the f-bombs fly and puppies roam

At Collision, an edgy tech event in Las Vegas this week, a panel of young marketers talked about wooing technically savvy, digitally driven millennials. Attendees poured into the room to catch the panel, called Marketing to Millennials, and panelists didn't disappoint. Social buzzwords rolled off their tongues, and they peppered their speech with f-bombs. In fact, the panel echoed the event itself. Everything about Collision rings of youthful exuberance, from its Las Vegas venue of giant tents and concrete floors to the masses of casually dressed attendees to young speakers excitedly talking tech instead of throwing up PowerPoint slides. There were even a couple of puppies on stage.

All of this appeals to millennials, the target demographic of the moment despite the fact that many are drowning in student loans. Nearly every digital marketing trend, even in business-to-business, seems to be aimed at them. But marketing to millennials takes a special kind of approach. 'They're an amazing group to market to, because they will not accept mediocre marketing,' says panelist Dee Anna McPherson, vice president of marketing at Hootsuite. 'They have very, very high standards.

They're socially conscious. They want to engage with brands that reflect their values.

They like to co-create with you. They really keep you on your toes.'

Don't sell to them, engage with them

Millennials don't want to be sold to as much as engaged with. For marketers, this means the content they produce on social networks and native advertising should take a conversational tone, tell compelling stories, entertain and educate rather than push a marketing message. Sure, millennials have been labeled as selfish and entitled, but the opposite is true, say panelists.

'There is a degree of self-centeredness, I think, with the whole sort of social media and selfie era, but they're probably the most socially conscious group out there,' says panelist Kyla Brennan, founder and CEO at HelloSociety, a social media marketing and technology solutions company. 'They really care about what brands are doing. They respond to brands that do good and are transparent.' 'You can't bullsh-t them,' says panelist Brad Haugen, CMO at SB Projects, adding, 'They're really changing the conversation.'

Millennials know when a brand is trying to play them for fools. One of the recurring themes of the panel is that millennials can sniff out inauthenticity a mile away. For instance, many marketers make the mistake of trying to woo millennials by co-opting their slang words, such as 'on fleek' and 'twerking.' These misguided efforts can backfire, as millennials start conversations on social media that make brands look foolish.

'That's the f-cking worst,' Brennan says. 'Just don't do it.' Other brands are making the right moves. McPherson, for instance, likes online eyeglass retailer Warby Parker, which is running a 'buy one, give one' marketing campaign. The company doesn't scream its coolness message in consumers' ears; rather, it lets consumers decide for themselves.

Consumers can also receive five 'try on' frames delivered to their homes, post pictures of themselves and get input from friends on which ones look best on them. 'They're killing it right now,' McPherson says. Haugen pointed to an older brand, Taco Bell, as doing a good job marketing to millennials. He says the fast-food restaurant has done a great job engaging consumers on social media, generally through humor. Haugen says many of his under-25 employees love the content Taco Bell delivers on both Twitter and Instagram.

Millennials are the flavor du jour, and they know it. They know what technology can deliver, and so they expect brands to give them a personalized, authentic customer experience.

Since they're in the driver's seat, they can also demand brands be socially conscious. If Taco Bell is any indicator, millennials want to laugh a little, too. Open Visual Communications Consortium A Path to Ubiquitous, Any-to-Any Video Communication Any Vendor. Over the last several years, great strides have been made to improve video communication capabilities in the industry.

Video over IP network technology has made video easier and faster to deploy. HD quality is now commonplace in video systems and clients.

Management and infrastructure solutions deployed in enterprises and organizations have enabled video networks to be established and custom dial plans implemented, enabling a rich set of visual communication experiences for users within those organizations. As a result, video adoption has increased across enterprises and organizations around the world. However, with growth have also come challenges. Those challenges have been most keenly experienced where enterprises or organizations have desired to have video communications across organizational boundaries. With voice and Internet traffic, one does not ponder how a network is connected because 'it just works' when one makes a call or accesses websites outside an end-user domain.

With video, the opposite has been true. Typically, end users only communicate via video within their own organization.

When communicating with outside parties, they often have to use awkward dial strings and/or engage in manual planning and testing over the public Internet to have a video call. Even then, a successful call can only be established if the IT departments of both companies have security or firewall policies that allow the video call to take place with parties outside their organization. The customer may choose to use a managed or hosted video service provider to help facilitate that communication; however, this only moves the problem to the service provider, which goes through a manual process to plan, test, and validate that the desired far-end parties are reachable.

Both end users and service providers must deal with a wide variety of technical issues when establishing video between different organizations or different service providers. These issues include network connections, network quality of service (QoS), NAT/firewall traversal, security policies, various signaling protocols, inconsistent dial strings, security rules within each organization impacting video, and incompatibilities between video endpoints. In addition, there are the operational considerations around coordinating the different types of management and scheduling systems and processes that exist within each service provider. Finally, the commercial considerations of termination and settlement between service providers must also be resolved. This combination of technical and business challenges has relegated video communication to a collection of isolated islands. It's easy to communicate within an island, but almost impossible to communicate between islands.

The ability to resolve these issues and federate the islands doesn’t lie within the power of any one customer, one equipment manufacturer, one service provider, or even one standards body to solve. It requires a concerted effort of the industry driven by the needs of their end users. The Open Visual Communications Consortium (OVCC) has been formed to address these issues. The mission of the OVCC group is to establish high-quality, secure, consistent, and easy-to-use video communication between these video 'islands,' thereby enabling a dramatic increase in the value of video communication to end customers worldwide. This paper describes the OVCC organization, its purpose, and how it is addressing the B2B communications challenges and enabling businesses to open the door to faster decision-making, easier, more productive collaboration with partners and customers, streamlined supply chain management, and game-changing applications in education, healthcare, government and business. Please read the attached whitepaper.

Best Practices for Security Monitoring: You Can't Monitor What You Can't See

Most security professionals focus on policy, training, tools, and technologies to address network security. Security tools and technologies, however, are only as good as the network data they receive for analysis.

With mounting Governance, Risk Management and Compliance (GRC) requirements, the need for network monitoring is intensifying. A new technology can help – the Network Monitoring Switch. It provides exactly the right data to each security tool and enables monitoring with dynamic adaptation. The Network Monitoring Switch resolves issues security teams have in getting visibility into the network and getting the right data for analysis. This whitepaper, targeted at security professionals, will address network visibility and will focus on:
• Monitoring inclusive of virtualized environments
• Monitoring 10GE/40GE networks using existing 1GE/10GE security tools
• Providing automated responses for adaptive monitoring
• Improving incident remediation
• Improving handling of sensitive data
• Providing granular access control so the entire monitoring process is tightly controlled
Please read the attached whitepaper.

Watch Tiny Gecko Robots Haul Loads Up to 2,000 Times Their Own Weight

Biologically inspired gecko-bots?

They aren't as rare as you might imagine. We've covered them before, and they've existed since at least 2006. That said, the latest generation, out of Stanford, comes in more sizes—and they're really, really strong. Two of the robots climb walls just like their namesake, but they can carry more than their own weight. One 9-gram robot can hoist nearly a kilogram. Indeed, it tows the original gecko-inspired robot up a wall in the video. Another bot, assembled with tweezers, drags a paper clip over ten times its weight. The most impressive (if less vertical) of the lot is the 12-gram ground robot that can haul up to 2,000 times its own weight across a surface.

David Christenson, a Stanford engineer working on the robots, explains this is the equivalent of a human "pulling around a blue whale." What's the secret to these robots' super strength? They pull themselves up (or across) surfaces on feet covered in tiny rubber spikes that mimic the minuscule hairs covering a gecko's footpads. When the bot places its foot, the spikes bend under the weight, increasing their surface area and stickiness. Once the weight is removed, the reverse happens—they straighten and easily disengage. Gecko-inspired materials may make for tiny, super strong climbing robots, or more mundanely they might simply allow us to more easily stick and unstick stuff.
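A back-of-envelope check shows the blue-whale comparison is roughly right, given the figures quoted in the headline and above:

```python
# Back-of-envelope check of the load claims quoted above.
load_ratio = 2000               # "up to 2,000 times their own weight"

robot_mass_g = 12
print(robot_mass_g * load_ratio / 1000, "kg hauled by the ground robot")  # 24.0 kg

human_mass_kg = 70
print(human_mass_kg * load_ratio / 1000, "tonnes")  # 140.0 -- blue whale territory
```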

University of Massachusetts researchers, for example, have developed a gecko-inspired adhesive called Geckskin. An index-card-sized piece of Geckskin supports up to 700 pounds on smooth surfaces like glass. Unlike other strong adhesives, Geckskin is easily removed. And perhaps its most fascinating property isn't its stickiness, but the fact that the material is cheap and low-tech, made of nylon, caulking, and carbon fiber or cotton. Whether it's awesome miniature robots or mundane supermaterials, the lesson is pretty clear. Why reinvent the wheel when we can reverse engineer nature?

We've only begun to scratch the surface.

Google Builds a Data Platform That's the Last Piece of Its Ad Empire: Connecting Dots for Marketers and Challenging Facebook. Google's data management platform could be the ultimate answer to Facebook's people-based marketing.

Google is testing a new advertising product seen as the last piece it needs to complete its ad tech superstructure. The search giant is building a data management platform to help target ads and connect brands with people online more effectively, according to sources familiar with the plans. The platform is called Doubleclick Audience Center, one source with knowledge of the new product said.

(Doubleclick is the brand name of Google's suite of ad products for digital publishers and marketers.) 'What they're trying to offer the community is a one-stop shop ad stack all in Google,' said one digital marketing executive. Google has a demand-side platform for advertisers to buy digital ads; an ad exchange for publishers to sell ads; an attribution product to measure performance; and a dominant position in search and mobile. 'They just don't have a full service offering yet,' the executive said, adding that a data management platform changes that. The Audience Center will be available to advertisers using Doubleclick's ad exchange and third-party ad networks. Doubleclick advertisers also could continue to use outside data services when buying with Google. Google confirmed that it is working on the platform, but declined to provide details. 'We are testing data management capabilities in DoubleClick to help partners better manage their own data as well as that from third parties,' the company said in a statement.

In the past year, Facebook emerged as a formidable rival to Google in people-based marketing, thanks to its 1.4 billion users who log in on various devices with their true identities. The social network has the Atlas ad server, Facebook Audience Network, and tools for brands to target custom audiences, among other ad tech offerings. Industry players have been trying to emulate Facebook's ability to match data to the right users as they move from one device to another.

Other ad tech players, for instance, have been building tools to recognize users across devices and deliver relevant advertising. It's not as easy for Google as it is for Facebook to connect people's Web personas from device to device. Still, Google has two dominant properties that a data management platform could integrate: billions of users of its search engine and more than 1 billion Android users. It also has YouTube and Gmail, among other popular services. 'Obviously Google has troves of data. It's one of the things that makes them so successful,' an ad tech executive said, adding that the 'last mile' for its ad piping is to tie the information more accurately to Web users on desktop, mobile, tablet and even digital TV. A data management platform could be the key to helping advertisers do just that.

'Google knows who is searching for what, and now it knows how to get in touch with people directly,' a digital marketing executive said. 'You can see how incredibly valuable that is.' Of course, Google faces regulatory scrutiny for any move it makes, as well as talk of anti-competitive practices. In fact, the company was recently charged by European antitrust regulators with behaving like a monopoly in search. The ad tech community has been concerned that Google is offering all the services needed to lock advertisers into its ecosystem and squeeze out rivals. 'Google has to wade carefully, because they are under a magnifying glass with everything they do,' the marketing exec said.

Customer self-service a must-have for Oracle Collaborate attendees. Companies now see online self-service options as a must-have for customer experience strategy. In late 2013, George Bisker was on the hunt for an online self-service option for customers. It could hurt the business if his company failed to keep pace with basic customer expectations like ordering products online. But Bisker, director of business systems at Cardinal Glass Industries in Eden Prairie, Minn., wanted a technology that would work with the company's ERP system, Oracle's JD Edwards, and that could handle the custom nature of Cardinal's business. Cardinal Glass sells to residential window providers such as Andersen Windows but also smaller businesses.

As a result, product orders can range from a simple pane of glass to framed windows in a variety of materials and colors. Bisker wanted an e-commerce platform that could reflect the range of customer orders without putting the onus on customers to locate product information to get the order right. Bisker also wanted to give customers useful account information without opening Pandora's box. Bisker didn't want customer service reps at Cardinal Glass flooded with follow-up phone calls about account information such as, 'Hey, I just saw online that my order is ready, on the shop floor, and it's going to X center, so why won't it be on the truck tomorrow?'

Bisker said in a session on building self-service into the company's website at the Collaborate conference this week. Bisker didn't want too much information to defeat the purpose of a well-designed customer self-service system. He believed that customer self-service was critical to business vitality. 'I believed in my heart of hearts that we had to have this capability,' he said.

But he had to persuade executives that the project had ROI. Online self-service technologies are gaining currency. But despite the clear efficiency and cost reduction that self-service can bring, companies have to walk a fine line. While enterprises are developing e-commerce sites that enable consumers to buy products, check on account issues or troubleshoot product questions via the Web, they have to be deft about the kinds of questions they route to self-service options and how they supplement self-service with other channels of interaction, such as phone calls and online content that provides answers.

Gartner research has indicated that companies enlisting self-service may be able to reduce contact center costs by up to 50%. Self-service may also drive company revenue. According to Forrester Research, 55% of U.S. adults are likely to abandon an online purchase if they cannot find a quick answer to their question. Moreover, according to research by MyCustomer.com and SSI, nearly 70% of consumers.

E-commerce options with a (glass) ceiling. At Cardinal Glass, over the past year, Bisker has built an e-commerce platform that integrates with JD Edwards.

Known as Cardinal Connect, the initiative enables customers to go to the company's website to purchase products, check on an order and access payment information. Not only did Bisker believe that customers needed self-service options, but Cardinal Connect also aimed to replace projects at two plants. These locations had developed their own rogue self-service Web pages in response to customer requests. But Bisker wanted to create a consistent, company-sanctioned self-service option. Using JD Edwards' Configurator, Cardinal Glass was able to integrate inventory and other ERP customer order data, enabling customers to use a simple Web page to order products or check on their accounts. But, as Bisker noted, his customer base ranges from large to small and orders products with a great deal of variety. He didn't want to have to build out Web pages with an infinite number of options to choose from.

That would likely confuse customers or create data errors. Instead, Bisker and his consultant team built Web forms that were table-driven, based on each customer's ordering history. For example, Andersen Windows could call up an order for windows with two or four panes, of a certain color and material, without having to hunt for the right options when they order. Instead, their order options are built into their order forms. As a result of the project, customers can order online efficiently and without some of the data errors they encountered previously, when they might order a product with insufficient data. Companies are striving to provide consumers with the right amount of information about their accounts without opening up their backend systems to a sea of customer inquiries based on that information. Further, they have to create tiers of service, in which customers use self-service options for less-complex queries while phone-based service remains available for more complex issues.
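The article doesn't show Cardinal's implementation, but a minimal sketch of the table-driven idea, in Python with entirely hypothetical data rows, field names, and helper function, might look like this: derive the option lists on a customer's order form from that customer's own ordering history, rather than exposing every possible option.

from collections import defaultdict

# Hypothetical order-history rows: (customer, product, panes, color, material).
ORDER_HISTORY = [
    ("Andersen Windows", "framed window", 2, "white", "vinyl"),
    ("Andersen Windows", "framed window", 4, "white", "wood"),
    ("Andersen Windows", "glass pane", None, "clear", "glass"),
    ("Small Shop LLC", "glass pane", None, "clear", "glass"),
]

def form_options(history, customer):
    """Collect the per-field choices to render on this customer's order form."""
    options = defaultdict(set)
    for cust, product, panes, color, material in history:
        if cust != customer:
            continue  # only this customer's history drives their form
        options["product"].add(product)
        if panes is not None:
            options["panes"].add(panes)
        options["color"].add(color)
        options["material"].add(material)
    return {field: sorted(values, key=str) for field, values in options.items()}

print(form_options(ORDER_HISTORY, "Andersen Windows"))

The point of the design is that the form shrinks to the handful of combinations a customer actually orders, which is what keeps both confusion and data errors down.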

Bisker said that the next step is threefold: wider adoption of the system by customers, moving the shadow-IT websites over to the Cardinal Connect system, and further development of the platform so customers have multiple points of entry to search on. For example, customers could find information not just via a purchase order or account number, but also by product name or date of order.

But, he said, he wants to proceed carefully. He doesn't want to code an infinite number of search options, but to create only the search options that make sense for a broad base of customers.

Self-service for self-guided robots. Matt Cooper, global customer experience CRM and project manager at iRobot, also wants to use technology to enhance customer self-service options.

The company makes self-guided robots for home uses like floor cleaning, as well as self-guided mechanisms that can detect bombs or aid HazMat workers in dangerous physical environments. And iRobot uses Oracle Service Cloud to enable customer self-service options. The company's products, Cooper said, are complex. So iRobot needs to be able to provide ample information so customers can troubleshoot their products themselves rather than resort to a phone call. On its website, the company features a fair amount of content, such as articles, videos and diagrams, so customers can walk through product issues. Cooper said that the self-service options also give reps more time for customers on the phone, so agents don't have to worry about traditional efficiency metrics like average call time and can instead focus on support quality.

Self-service options have 'reduced our call volume enough that we can focus more on the customers and on [a measure of how many interactions it takes to resolve a customer issue],' Cooper said. Cooper also plans to use analytics to bring self-service to the next level. He envisions a time when he can use the reporting from the Service Cloud to see issues proactively and alert customers even before problems occur. 'A lot of service is going to where you can predict the customer's needs before they need it,' Cooper said. To that end, iRobot is considering exploiting Internet of Things (IoT)-connected devices, where the company's products would send constant streams of data back to company databases via the Web. For example, with IoT in place, the company might be able to sense a defect in a product's operation even before a customer notices it and schedule a service visit. Cooper said that it could take time to achieve the company's vision.

Cooper said that part of getting there will be a bit more integration of the rule base and Oracle's custom Business Objects component. The company then can create more customized alerts and reporting to get that kind of data -- and be able to respond in real time.

Discover the Chemical Composition of Everyday Stuff With a Smartphone Camera. Our smartphones can do a lot—compute, pin down our location, sense motion and orientation, send and receive wireless signals, take photographs and video. What if you could also learn exactly what chemical components were present in any object? A new technology out of Tel Aviv University aims to enable just that. 'The tricorder is no longer science fiction,' a recent Tel Aviv University (TAU) article declared.

While a number of devices in recent years have invited tricorder comparisons, maybe this one is a little closer. Created by TAU engineering professor David Mendlovic and doctoral student Ariel Raz, the technology is an intimate combination of innovative hardware and software. The former, a microelectromechanical systems (MEMS) optical component, is mass producible and compatible with existing smartphone cameras. The component is a kind of miniature filter that would allow smartphone cameras to take hyperspectral images that record the spectrum of light present in every pixel of the image. Software then creates a spectral map and compares it to a database of spectral “fingerprints” associated with known substances. 'The optical element acts as a tunable filter and the software—an image fusion library—would support this new component and extract all the relevant information from the image,' said Mendlovic.
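The article doesn't detail the matching algorithm, but a minimal sketch of the fingerprint-comparison step, in Python with made-up substances and band values and a simple cosine-similarity score, might look like this:

import numpy as np

# Illustrative reference "fingerprints": substance -> reflectance at fixed bands.
FINGERPRINTS = {
    "water":   np.array([0.02, 0.03, 0.02, 0.01, 0.01]),
    "ethanol": np.array([0.10, 0.22, 0.18, 0.07, 0.04]),
    "sugar":   np.array([0.30, 0.28, 0.25, 0.22, 0.20]),
}

def identify(pixel_spectrum, database):
    """Return the best-matching substance and its cosine-similarity score."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cosine(pixel_spectrum, ref) for name, ref in database.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

pixel = np.array([0.11, 0.20, 0.17, 0.08, 0.05])  # one pixel's recorded spectrum
print(identify(pixel, FINGERPRINTS))  # -> ('ethanol', ~0.99)

A real system would run a comparison like this across every pixel of the spectral map and correct for illumination and noise, which is presumably where the 'image fusion library' comes in.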

Point a handheld computing device at an object and learn its composition. Hyperspectral imaging itself isn't new. USGS's Landsat satellites, for example, have been using a similar digital imaging technique to analyze the Earth's surface from space for decades. The Israeli device is notable, however, because it exemplifies a more general trend in sensors: What was once large, costly, and the sole domain of states is now tiny, affordable, and in our pockets. And the interesting part is that we don't know exactly how each new miniature sensor will be used. Once incorporated into smartphones and opened to app developers, old sensors rapidly find new niches. Motion sensors, for example, are now commonly used in sleep tracking apps.

GPS doesn't just locate you on a map; it also enables your phone to automatically provide local weather, time, or the nearest bus stop. What would a tricorder-like hyperspectral camera allow mobile devices to do? It would obviously be a fun novelty, great for analyzing that cocktail at happy hour. But depending on the accuracy of the device, applications range further than pure fun.

Health apps and handheld diagnostic devices come to mind. Currently, to keep track of what you're eating, you have to manually enter each food. What if a simple photo of your plate was enough to analyze its nutritional content? It could be a great tool to spot dangerous ingredients for those with food allergies, but as it only records surface information for opaque objects (reflected light), it seems you'd never know what was lurking beneath—probably not worth the risk. Farmers might use a drone with a miniature hyperspectral camera to monitor crops. Industrial workers with smartglasses (or robots) might use a hyperspectral camera to view the chemical composition of their surroundings in augmented reality, confirming all is well or warning of invisible hazards. These applications are likely unimaginative compared to what may arise after developers take a look. 'A long list of fields stand to gain from this new technology,' Mendlovic said.

'We predict hyperspectral imaging will play a major role in consumer electronics, the automotive industry, biotechnology, and homeland security.' Obviously, we aren't there yet.

But soon perhaps. One critical piece, yet to be fully worked out, is providing a large enough database of spectral signatures of everyday (and not so everyday) materials.

Mendlovic says his team is in talks with other organizations to help analyze images, and they are also speaking to smartphone and wearable device makers and car companies. They recently showed off a demonstration system and anticipate a prototype this June. And perhaps we can see the greater potential by looking beyond the individual sensor and seeing how it converges with other sensors to create an all-in-one, tricorder-like device. It might prove widely useful for regular folks, scientists, doctors, and starship captains (of course) to study our bodies and environments.

The changing role of DevOps in enterprise mobility. The lines between mobile application development, enterprise mobility management and mobile infrastructure get blurrier by the day. It's hard to find the right tools to properly manage and secure apps after they've been built, and even harder to connect these apps to existing enterprise systems. This issue of Modern Mobility explores the concept of mobile DevOps -- the incorporation of management, security and infrastructure hooks into the app development process -- and how it can help businesses get more out of enterprise mobility. In this issue's two columns, Brian Katz explains how to satisfy security and usability requirements when building mobile apps, and Jack Madden describes the ins and outs of mobile application development platforms. We also talk to Good Technology's CTO about the role of enterprise mobility management in a world of connected devices.

Please read the attached whitepaper.

2015's Most Electrifying Emerging Tech? World Economic Forum Releases Annual List. Writing lists forecasting technology is a bit like writing science fiction. Prerequisites include intimate knowledge of the bleeding edge of technology, the ability to separate signal from noise, and more than a little audacity. Looking back, such lists can appear adorably quaint or shockingly prescient. In either case, they're usually as much a reflection of and commentary on the present as they are a hint at the future. What problems seemed most pressing and which solutions were most exciting? The World Economic Forum's Top 10 Emerging Technologies list isn't old enough to make serious judgments about its track record. But looking back, each year's contribution reminded me of the year in which it was written.

Enhanced education technology, for example, was included in 2012—when the launch of Coursera, edX, and Udacity led the New York Times to dub it “The Year of the MOOC [Massive Open Online Course].” 2013’s #1 spot, OnLine Electric Vehicles, was inspired by those South Korean buses charged by the road. Two and three years on? Though much expanded, online education is still struggling to prove its worth, and road-charged vehicles remain a rarity. The former will get there, in my view, while the latter may lag in cities, where big infrastructure projects—especially on main arteries like roads—are so disruptive. But right or wrong isn’t the point (yet). These are emerging technologies.

The World Economic Forum expects most of these tools, often still in the research stage, to take anywhere from 10 to 30 years to have a broad impact. Ultimately, some will succeed, some will partially succeed, and some will fail or be replaced. Debating what should or shouldn't have been included—that's the fun part. To that end, we've summarized each entry on this year's list below. Check out the list in full, and leave your thoughts in the comments. 'Fuel cell vehicles: Zero-emission cars that run on hydrogen' Long-promised, fuel cell vehicles are finally here. Various car companies are aiming to bring new models to market.

Though pricey at first ($70,000), if they prove popular, prices could fall in coming years. Fuel cells combine the most attractive characteristics of gas-powered cars and electric cars. Electric cars are criticized for limited range and lengthy recharging times.

Fuel cell cars, meanwhile, go 400 miles on a tank of compressed hydrogen and take minutes to refuel. They are also clean burning—replacing toxic gases, like carbon monoxide and soot, with naught but water vapor. Despite better fuel cell technology, there are a number of obstacles, largely shared by electric cars, that may prevent widespread adoption in the near term.

These include the clean, large scale production of hydrogen gas, the transportation of gas over long distances, and the construction of refueling infrastructure. 'Next-generation robotics: Rolling away from the production line' Like fuel cell vehicles, everyday robots have long commanded prime real estate in the imagination. But beyond factories, they have yet to break into the mainstream. Most are still big, dangerous, and dumb. That's changing thanks to better sensors, improved robotic bodies (often inspired by nature), increasing computing and networking power, and easier programming (it no longer takes a PhD). These new, more flexible robots are capable of tasks beyond assembly lines.

New applications range from weeding and harvesting on farms to helping patients out of bed in Japanese hospitals. Prime fears include the concern that robots will replace human workers or run amok. These risks may appear increasingly realistic. However, the list's writers note that prior rounds of automation tended to produce higher productivity and growth. Meanwhile, more familiarity and experience with robots may reduce fears, and a strong human-machine alliance is the more likely outcome. 'Recyclable thermoset plastics: A new kind of plastic to cut landfill waste' There are two commonly used categories of plastics: thermoplastics (which can be reshaped and thus recycled) and thermoset plastics (which can only be shaped once and are not recyclable). The latter category is prized for durability and is widely used.

But the tradeoff for toughness is that most end up in landfills. Just last year, however, researchers discovered recyclable thermoset plastics. The new category, dubbed poly(hexahydrotriazine)s, or PHTs, can be dissolved in strong acid and reused in new products.

Achieving recyclability without sacrificing durability means they may replace previously unrecyclable components. How quickly might this happen?

The list's writers predict recyclable thermoset plastics will be ubiquitous by 2025. The move could significantly reduce the amount of plastic waste in landfills across the globe. 'Precise genetic-engineering techniques: A breakthrough offers better crops with less controversy' New genetic engineering techniques are more precise and forgo controversial methods that rely on the bacterial transfer of DNA. The breakthrough CRISPR-Cas9 gene editing method, for example, uses RNA to disable or modify genes in much the same way such changes happen during natural genetic mutation.

The technique can also accurately insert new sequences or genes into a target genome. Another advance, RNA interference (RNAi), protects crops from viral infection, fungal pathogens, and pests and may reduce dependence on chemical pesticides. Major staples including wheat, rice, potatoes, and bananas may benefit from the tech. The report predicts declining controversy as genetic engineering helps boost incomes of small farmers, feeds more people, and thanks to more precise techniques, avoids transgenic plants and animals (those with foreign genetic material). Meanwhile, the tech may make agriculture more sustainable by reducing needed resources like water, land, and fertilizer. 'Additive manufacturing: The future of making things, from printable organs to intelligent clothes' 3D printing has been used in industrial prototyping for years, but now it's beginning to branch out.

3D printed objects offering greater customization, like Invisalign's tailor-made orthodontic braces, are coming to market. 3D bioprinting machines are placing human cells layer by layer to make living tissues (e.g., skin, bone, heart, and vascular tissue). Early applications are in the testing of new drugs, but eventually researchers hope to print whole, transplantable organs. Next steps include 3D printed integrated electronics, like circuit boards—though nanoscale parts, like those in processors, still face challenges—and 4D printed objects that transform themselves in accordance with environmental conditions, like heat and humidity. These might be useful in clothes or implants. Potentially disruptive to the traditional manufacturing market, additive manufacturing is still confined to limited applications in cars, aerospace, and medicine.

Still, fast growth is expected in the next decade. 'Emergent artificial intelligence: What happens when a computer can learn on the job?'

Artificial intelligence is on the rise. Microsoft, Google, Facebook, and others are developing the technology to allow machines to autonomously learn by sifting massive amounts of data. Hand-in-hand with advanced robotics, AI will boost productivity, freeing us from certain jobs and, in many cases, even doing them better.

It's thought that driverless cars, for example, will reduce accidents (which are often due to human error), and AI systems like Watson may help doctors diagnose disease. The fear we may lose control of superintelligent machines remains a hot topic, as does the concern over increasing technological unemployment and inequality. The report notes the former may yet be decades away, and despite a possibly bumpy path, the latter may make human attributes, like creativity and emotional IQ, more highly valued. The future will challenge our conception of what it means to be human and force us to deeply consider the risks and benefits of giving machines human-like intelligence. 'Distributed manufacturing: The factory of the future is online—and on your doorstep' Instead of raw materials being gathered and assembled in a centralized factory and assembly line, in distributed manufacturing, materials would be spread over many hubs and products would be made near the customer. How would it work?

'Replace as much of the material supply chain as possible with digital information.' If you've visited online 3D printing marketplaces, you've seen the future of manufacturing.

Plans for a product are digitized in 3D modeling software and uploaded to the web. Tens of thousands of people can grab the files and make the product anywhere there's a 3D printer. This might be at a local business, university, or eventually, in our homes. Distributed manufacturing may more efficiently use resources and lower barriers to entry, allowing for increased diversity (as opposed to today's heavily standardized, assembly line products). Additionally, instead of requiring trucks, planes, and ships to move things—we'll simply zap them over the internet. This may allow for goods to rapidly travel the whole globe, even to places not currently well served.

Risks include intellectual property violations, as we saw when music was digitized, and less control over dangerous items, like guns or other weapons. And not all items will be amenable to distributed manufacturing. Traditional methods will remain, but perhaps much reduced in scope.

'Sense and avoid drones: Flying robots to check power lines or deliver emergency aid' People are finding all manner of interesting non-military uses for drones: agriculture, news gathering, delivery, filming and photography. The drawback to date, however, is that they still require a human pilot.

The next step? Drones that fly themselves. To do this safely, they'll need to sense and avoid obstacles.

Early prototypes are already here. Just last year, Intel and Ascending Technologies showed off drones able to navigate an obstacle course and avoid people. Once autonomous, drones can undertake dangerous tasks (without needing a human to be near). This might include checking power lines or delivering supplies after a disaster. Or drones may monitor crops and allow more efficient use of resources like water and fertilizer. Remaining challenges include making more robust systems capable of flight in adverse conditions.

Once perfected, however, drones, like robots, will take the power of computing into the three-dimensional physical realm and 'vastly expand our presence, productivity and human experience.' 'Neuromorphic technology: Computer chips that mimic the human brain' In some ways, the human brain remains the envy of today's most sophisticated supercomputers. It is powerful, massively parallel, and insanely energy efficient. Computers, no matter how fast they are, are still linear power hogs that substitute brute force for elegance. But what if we could combine the two?

Neuromorphic chips, like IBM's TrueNorth, are inspired by the brain and hope to do just that. Instead of shuttling data back and forth between stored memory and central processors, neuromorphic chips combine storage and processing in the same interconnected, neuron-like components. This fundamentally different chip architecture may vastly speed processing and improve machine learning.

TrueNorth has a million 'neurons' that, when working on some tasks, are hundreds of times more power efficient than conventional chips. Neuromorphic chips 'should allow more intelligent small-scale machines to drive the next stage in miniaturization and artificial intelligence... [where] computers will be able to anticipate and learn, rather than merely respond in pre-programmed ways.'

'Digital genome: Healthcare for an age when your genetic code is on a USB stick' The cost to sequence a human genome has fallen exponentially since it was first done. In the beginning, it cost tens or even hundreds of millions of dollars to sequence a single genome. Now, the cost is somewhere around $1,000, and a single machine can sequence tens of thousands of genomes a year. As more people get their genomes sequenced, the information can be stored on a laptop or USB stick, or shared online. Quick and affordable genetic testing promises to make healthcare—from the genetic components of heart disease to cancer—more individually tailored, targeted, and effective. Prime concerns and challenges include the security and privacy of personal information. Communicating genetic risks and educating people about what those risks mean will also be critical.

In aggregate, however, the list's authors say it is more likely that the benefits of personalized medicine will outweigh the risks. The Top 10 Emerging Technologies of 2015 list was compiled by the World Economic Forum's Meta-Council on Emerging Technologies.

To learn more, see the full report.

Organizations are struggling with a fundamental challenge – there's far more data than they can handle.

Sure, there's a shared vision to analyze structured and unstructured data in support of better decision making, but is this a reality for most companies? The big data tidal wave is transforming the database management industry, employee skill sets, and business strategy as organizations race to unlock meaningful connections between disparate sources of data. Graph databases are rapidly gaining traction in the market as an effective method for deciphering meaning, but many people outside the space are unsure what exactly this entails.

Generally speaking, graph databases store data in a graph structure where entities are connected through relationships to adjacent elements. The Web is a graph; so are your friend-of-a-friend network and the road network. The fact is, we all encounter the principles of graph databases in many aspects of our everyday lives, and this familiarity will only increase. Consider just a few examples:
• Facebook, Twitter and other social networks all employ graphs for more specific, relevant search functionality. Results are ranked and presented to us to help us discover things.
• By 2020, it is predicted that the number of connected devices will reach nearly 75 billion globally.

As the Internet of Things continues to grow, it is not the devices themselves that will dramatically change the ways in which we live and work, but the connections between these devices. Think healthcare, work productivity, entertainment, education and beyond.
• There are over 40,000 Google searches processed every second. This results in 3.5 billion searches per day and 1.2 trillion searches per year worldwide. Online search is ubiquitous in terms of information discovery. As people not only perform general Google searches, but search for content within specific websites, graph databases will be instrumental in driving more relevant, comprehensive results. This is game changing for online publishers, healthcare providers, pharma companies, government and financial services, to name a few.

• Many of the most popular online dating sites leverage graph database technology to cull through the massive amounts of personal information users share to determine the best romantic matches. Because relationships matter.
In the simplest terms, graph databases are all about relationships between data points.

Think about the graphs we come across every day, whether in a business meeting or news report. Graphs are often diagrams demonstrating and defining pieces of information in terms of their relations to other pieces of information. Traditional relational databases can easily capture the relationship between two entities, but when the object is to capture "many-to-many" relationships between multiple points of data, queries take a long time to execute and maintenance is quite challenging. For instance, imagine searching a social network for friends who attended the same university AND live in San Francisco AND share at least three mutual friends with you. Graph databases can execute these types of queries instantly, with just a few lines of code or mouse clicks. The implications across industries are tremendous.
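To make that concrete, here is a minimal sketch of the friend query in Python using the open-source networkx graph library on a toy, made-up network; the people, profile attributes, and matches() helper are illustrative, not any particular product's API:

import networkx as nx

# A toy social graph: nodes are people with profile attributes, edges are friendships.
g = nx.Graph()
g.add_node("me", university="Stanford", city="San Francisco")
for name, uni, city in [
    ("ana", "Stanford", "San Francisco"),
    ("bob", "Stanford", "Portland"),
    ("cara", "MIT", "San Francisco"),
    ("dev", "Stanford", "San Francisco"),
]:
    g.add_node(name, university=uni, city=city)
g.add_edges_from([
    ("me", "ana"), ("me", "bob"), ("me", "cara"), ("me", "dev"),
    ("ana", "bob"), ("ana", "cara"), ("ana", "dev"),
    ("dev", "bob"), ("dev", "cara"),
])

def matches(graph, me, min_mutual=3):
    """People at my university, in San Francisco, sharing >= min_mutual friends with me."""
    my_friends = set(graph.neighbors(me))
    my_uni = graph.nodes[me]["university"]
    for person in graph.nodes:
        if person == me:
            continue
        mutual = my_friends & set(graph.neighbors(person))
        if (graph.nodes[person]["university"] == my_uni
                and graph.nodes[person]["city"] == "San Francisco"
                and len(mutual) >= min_mutual):
            yield person, len(mutual)

print(list(matches(g, "me")))  # -> [('ana', 3), ('dev', 3)]

In a relational schema the same question would take several self-joins over a friendships table; in a graph it is a local neighborhood traversal, which is why such queries stay fast as the data grows.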

Graph databases are gaining in popularity for a variety of reasons. Many are schema-less, allowing you to manage your data more efficiently. Many support a powerful query language, SPARQL.

Some allow for simultaneous graph search and full-text search of content stores. Some exhibit enterprise resilience, replication and highly scalable simultaneous reads and writes. And some have other very special features worthy of further discussion. One specialized form of graph database is an RDF triplestore. This may sound like a foreign language, but at the root of these databases are concepts familiar to all of us. Consider the sentence, “Fido is a dog.” This sentence structure – subject-predicate-object – is how we speak naturally and is also how data is stored in a triplestore. Nearly all data can be expressed in this simple, atomic form.

Now let’s take this one step further. Consider the sentence, “All dogs are mammals.” Many triplestores can reason just the way humans can. They can come to the conclusion that “Fido is a mammal.” What just happened? An RDF triplestore used its “reasoning engine” to infer a new fact.
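As a minimal sketch of that round trip, here is the Fido example using the open-source Python libraries rdflib and owlrl (standing in for a commercial triplestore; the example.org namespace is illustrative):

from rdflib import Graph, Namespace, RDF, RDFS
import owlrl

EX = Namespace("http://example.org/")
g = Graph()

# Store the asserted facts as subject-predicate-object triples:
# "Fido is a dog" and "all dogs are mammals."
g.add((EX.Fido, RDF.type, EX.Dog))
g.add((EX.Dog, RDFS.subClassOf, EX.Mammal))

# Run the RDFS reasoner to expand the graph with inferred triples.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

# The inferred fact "Fido is a mammal" is now queryable with SPARQL.
for row in g.query("SELECT ?m WHERE { ?m a <http://example.org/Mammal> }"):
    print(row.m)  # -> http://example.org/Fido

Note that the query asks only for mammals; the reasoner, not the query author, supplied the connection from dog to mammal.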

These new facts can be useful in providing answers to queries such as "What types of mammals exist?" In other words, the "knowledge base" was expanded with related, contextual information. With so many organizations interested in producing new information products, this process of "inference" is a very important aspect of RDF triplestores. But where do the original facts come from? Documents, articles, books and e-mails all contain free-flowing text; imagine a technology that can analyze that text, create semantic triples from it, and store the results inside the RDF triplestore for later reuse.

The breakthrough here is profound on many levels: 1) text mining can be tightly integrated with RDF triplestores to automatically create and store useful facts, and 2) RDF triplestores not only manage those facts but also "reason," and therefore extend the knowledge base using inference. Why is this groundbreaking? The full set of reasons extends beyond the scope of this article, but here is one of the most important: your unstructured content becomes discoverable, allowing all types of users to quickly find the exact information they are searching for. This is a monumental breakthrough, since so much of the data that organizations stockpile today exists in dark data repositories. We said earlier that RDF triplestores are a type of graph database.

By their very nature, the triples stored inside the graph database (think "facts" in the form of subject-predicate-object) are connected. "Fido is a dog. All dogs are mammals. Mammals are warm blooded. Mammals have different body temperatures." The facts are linked, and these connections can be measured. Some entities are more connected than others, just as some web pages are more connected than other web pages.

Because of this, metrics can be used to rank the entries in a graph database. One of the first (and most popular) algorithms used at Google is PageRank, which counts the number and quality of links to a page – an important metric in assessing the importance of a web page.
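For a feel of how that works, here is a minimal power-iteration sketch of the PageRank idea in Python, on a toy link graph rather than anything Google-scale:

def pagerank(links, damping=0.85, iterations=50):
    """links maps each node to the list of nodes it points to."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node, outgoing in links.items():
            if not outgoing:  # dangling node: spread its rank evenly
                for n in nodes:
                    new_rank[n] += damping * rank[node] / len(nodes)
            else:
                share = damping * rank[node] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(sorted(pagerank(links).items(), key=lambda kv: -kv[1]))  # "c" ranks highest

Each iteration passes a damped share of every node's rank along its outgoing links, so rank accumulates at nodes that other well-connected nodes point to; that is the "number and quality of links" in action.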

Similarly, facts inside a triplestore can be ranked to identify important, interconnected entities, with the most connected ordered first. There are many ways to measure the entities, but this is one very popular use case. With billions of facts referencing connected entities inside a graph database, this information source can quickly become the foundation for knowledge discovery and knowledge management. Today, organizations can structure their unstructured data, add additional free facts from Linked Open Data sets, and combine all of this with controlled vocabularies, thesauri, taxonomies or ontologies, which, to one degree or another, are used to classify the stored entities and depict relationships.

Real knowledge is then surfaced from the results of queries, visual analysis of graphs or both. Everything is indexed inside the triplestore. Graph databases (and specialized versions called native RDF triplestores that embody reasoning power) show great promise in knowledge discovery, data management and analysis. They reveal simplicity within complexity.

When combined with text mining, their value grows tremendously. As the database ecosystem continues to grow, as more and more connections are formed, and as unstructured data multiplies with fury, the ability to analyze text and structure the results inside graph databases is becoming an essential part of that ecosystem. Today, these combined technologies are available, and not just reserved for the big search engine providers. It may be time for you to consider how to better store, manage, query and analyze your own data. Graph databases are the answer. Tony Agresta is the Managing Director of Ontotext USA.

Ontotext was established in 2000 to address challenges in semantic technology using text mining and graph databases.