"So much joy I cry, so much pain I laugh."
The ink of the scholar is more sacred than the blood of the martyr.
Remember, you need more than one note to make beautiful music.
Love is the missing link!
Wandering the information superhighway, he came upon the last refuge of civilization, PoFo, the only forum on the internet ...
Billionaire ex-Facebook president Sean Parker unloads on Mark Zuckerberg and admits he helped build a monster

I'd like to point out: changing or abandoning social media's advertising models will not stop the beast... The medium is the message.
Sean Parker, the first president of Facebook, has a disturbing warning about the social network: "God only knows what it's doing to our children's brains."
Speaking to the news website Axios, the entrepreneur and executive talked openly about what he perceives as the dangers of social media and how it exploits human "vulnerability."
"The thought process that went into building these applications, Facebook being the first of them ... was all about: 'How do we consume as much of your time and conscious attention as possible?'" said Parker, who joined Facebook in 2004, when it was less than a year old.
"And that means that we need to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever," he told Axios. "And that's going to get you to contribute more content, and that's going to get you ... more likes and comments."
Parker added: "It's a social-validation feedback loop ... exactly the kind of thing that a hacker like myself would come up with, because you're exploiting a vulnerability in human psychology."
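The loop Parker describes can be sketched as a toy simulation. Everything here — the reward probability, the decay factors, the `session` helper — is invented for illustration and is not taken from the article or any study; the point is only that intermittent rewards plus a shrinking "urge delay" is a self-reinforcing cycle:

```python
import random

def session(checks, like_prob=0.3, seed=42):
    """Toy model of a social-validation feedback loop: each feed check
    delivers a reward (a like or comment) with some probability, and
    every reward makes the next check more tempting by shortening the
    'urge delay' until the user looks again."""
    random.seed(seed)
    urge_delay = 60.0   # minutes until the next urge to check the feed
    rewards = 0
    for _ in range(checks):
        if random.random() < like_prob:   # intermittent reward arrives
            rewards += 1
            urge_delay *= 0.8             # reward shortens the next delay
        else:
            urge_delay *= 1.02            # mild extinction without reward
    return rewards, urge_delay

rewards, delay = session(50)
print(rewards, round(delay, 2))
```

With even a modest reward rate, the delay collapses far faster than the no-reward drift restores it — the "hack" is in the asymmetry, not the individual reward.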
"The inventors, creators — it's me, it's Mark [Zuckerberg], it's Kevin Systrom on Instagram, it's all of these people — understood this consciously," he said. "And we did it anyway."
Facebook did not immediately respond to Business Insider's request for comment.
Some in tech are growing disillusioned — and worried
Parker isn't the only tech figure to express disillusionment and worry over what they helped create. Tristan Harris, a former Google employee, has been outspoken in his criticism of how tech companies' products hijack users' minds.
"If you're an app, how do you keep people hooked? Turn yourself into a slot machine," he wrote in a widely shared Medium post in 2016.
"We need our smartphones, notifications screens and web browsers to be exoskeletons for our minds and interpersonal relationships that put our values, not our impulses, first," he continued. "People's time is valuable. And we should protect it with the same rigor as privacy and other digital rights."
In a recent feature, The Guardian spoke to tech workers and industry figures who have been critical of Silicon Valley business practices.
Loren Brichter, the designer who created the slot-machine-like pull-down-to-refresh mechanism now widely used on smartphones, said, "I've spent many hours and weeks and months and years thinking about whether anything I've done has made a net positive impact on society or humanity at all."
Brichter added: "Pull-to-refresh is addictive. Twitter is addictive. These are not good things. When I was working on them, it was not something I was mature enough to think about. I'm not saying I'm mature now, but I'm a little bit more mature, and I regret the downsides."
And Roger McNamee, an investor in Facebook and Google, told The Guardian: "The people who run Facebook and Google are good people, whose well-intentioned strategies have led to horrific unintended consequences ... The problem is that there is nothing the companies can do to address the harm unless they abandon their current advertising models."
The comments from Parker and others are further evidence of souring public sentiment about Silicon Valley. Once lauded in utopian terms, companies like Facebook have now come under heavy criticism over their role in the spread of "fake news" and Russian propaganda.
http://www.businessinsider.com/ex-faceb ... ty-2017-11
Why the modern world is bad for your brain
Although we think we’re doing several things at once, multitasking, this is a powerful and diabolical illusion. Earl Miller, a neuroscientist at MIT and one of the world experts on divided attention, says that our brains are “not wired to multitask well… When people think they’re multitasking, they’re actually just switching from one task to another very rapidly. And every time they do, there’s a cognitive cost in doing so.” So we’re not actually keeping a lot of balls in the air like an expert juggler; we’re more like a bad amateur plate spinner, frantically switching from one task to another, ignoring the one that is not right in front of us but worried it will come crashing down any minute. Even though we think we’re getting a lot done, ironically, multitasking makes us demonstrably less efficient.
Multitasking has been found to increase the production of the stress hormone cortisol as well as the fight-or-flight hormone adrenaline, which can overstimulate your brain and cause mental fog or scrambled thinking. Multitasking creates a dopamine-addiction feedback loop, effectively rewarding the brain for losing focus and for constantly searching for external stimulation. To make matters worse, the prefrontal cortex has a novelty bias, meaning that its attention can be easily hijacked by something new – the proverbial shiny objects we use to entice infants, puppies, and kittens. The irony here for those of us who are trying to focus amid competing activities is clear: the very brain region we need to rely on for staying on task is easily distracted. We answer the phone, look up something on the internet, check our email, send an SMS, and each of these things tweaks the novelty-seeking, reward-seeking centres of the brain, causing a burst of endogenous opioids (no wonder it feels so good!), all to the detriment of our staying on task. It is the ultimate empty-caloried brain candy. Instead of reaping the big rewards that come from sustained, focused effort, we instead reap empty rewards from completing a thousand little sugar-coated tasks.
In the old days, if the phone rang and we were busy, we either didn’t answer or we turned the ringer off. When all phones were wired to a wall, there was no expectation of being able to reach us at all times – one might have gone out for a walk or been between places – and so if someone couldn’t reach you (or you didn’t feel like being reached), it was considered normal. Now more people have mobile phones than have toilets. This has created an implicit expectation that you should be able to reach someone when it is convenient for you, regardless of whether it is convenient for them. This expectation is so ingrained that people in meetings routinely answer their mobile phones to say, “I’m sorry, I can’t talk now, I’m in a meeting.” Just a decade or two ago, those same people would have let a landline on their desk go unanswered during a meeting, so different were the expectations for reachability.
Just having the opportunity to multitask is detrimental to cognitive performance. Glenn Wilson, former visiting professor of psychology at Gresham College, London, calls it info-mania. His research found that being in a situation where you are trying to concentrate on a task, and an email is sitting unread in your inbox, can reduce your effective IQ by 10 points. And although people ascribe many benefits to marijuana, including enhanced creativity and reduced pain and stress, it is well documented that its chief ingredient, cannabinol, activates dedicated cannabinol receptors in the brain and interferes profoundly with memory and with our ability to concentrate on several things at once. Wilson showed that the cognitive losses from multitasking are even greater than the cognitive losses from pot‑smoking.
Russ Poldrack, a neuroscientist at Stanford, found that learning information while multitasking causes the new information to go to the wrong part of the brain. If students study and watch TV at the same time, for example, the information from their schoolwork goes into the striatum, a region specialised for storing new procedures and skills, not facts and ideas. Without the distraction of TV, the information goes into the hippocampus, where it is organised and categorised in a variety of ways, making it easier to retrieve. MIT’s Earl Miller adds, “People can’t do [multitasking] very well, and when they say they can, they’re deluding themselves.” And it turns out the brain is very good at this deluding business.
Then there are the metabolic costs that I wrote about earlier. Asking the brain to shift attention from one activity to another causes the prefrontal cortex and striatum to burn up oxygenated glucose, the same fuel they need to stay on task. And the kind of rapid, continual shifting we do with multitasking causes the brain to burn through fuel so quickly that we feel exhausted and disoriented after even a short time. We’ve literally depleted the nutrients in our brain. This leads to compromises in both cognitive and physical performance. Among other things, repeated task switching leads to anxiety, which raises levels of the stress hormone cortisol in the brain, which in turn can lead to aggressive and impulsive behaviour. By contrast, staying on task is controlled by the anterior cingulate and the striatum, and once we engage the central executive mode, staying in that state uses less energy than multitasking and actually reduces the brain’s need for glucose.
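Miller's switching-cost argument can be made concrete with a back-of-the-envelope model. The numbers here (10-minute tasks, a 2-minute cost per switch, 5 slices per task) are purely illustrative assumptions, not measurements from the research:

```python
def total_time(tasks, task_time=10.0, switch_cost=2.0, interleaved=True):
    """Toy model of task-switching overhead: the work itself takes the
    same total time either way, but interleaving tasks in small slices
    pays the switch cost between every slice instead of once per task."""
    if not interleaved:
        # Focused mode: finish each task, switching only between tasks.
        return tasks * task_time + (tasks - 1) * switch_cost
    # Interleaved mode: split each task into 5 slices and pay the
    # switch cost between every consecutive slice.
    slices = tasks * 5
    return tasks * task_time + (slices - 1) * switch_cost

print(total_time(4, interleaved=False))  # 46.0
print(total_time(4, interleaved=True))   # 78.0
```

Same 40 minutes of actual work, but the interleaved schedule spends nearly as much time switching as working — a crude picture of why rapid shifting "burns through fuel."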
To make matters worse, lots of multitasking requires decision-making: Do I answer this text message or ignore it? How do I respond to this? How do I file this email? Do I continue what I’m working on now or take a break? It turns out that decision-making is also very hard on your neural resources and that little decisions appear to take up as much energy as big ones. One of the first things we lose is impulse control. This rapidly spirals into a depleted state in which, after making lots of insignificant decisions, we can end up making truly bad decisions about something important. Why would anyone want to add to their daily weight of information processing by trying to multitask?
In discussing information overload with Fortune 500 leaders, top scientists, writers, students, and small business owners, email comes up again and again as a problem. It’s not a philosophical objection to email itself, it’s the mind-numbing number of emails that come in. When the 10-year-old son of my neuroscience colleague Jeff Mogil (head of the Pain Genetics lab at McGill University) was asked what his father does for a living, he responded, “He answers emails.” Jeff admitted after some thought that it’s not so far from the truth. Workers in government, the arts, and industry report that the sheer volume of email they receive is overwhelming, taking a huge bite out of their day. We feel obliged to answer our emails, but it seems impossible to do so and get anything else done.
Before email, if you wanted to write to someone, you had to invest some effort in it. You’d sit down with pen and paper, or at a typewriter, and carefully compose a message. There wasn’t anything about the medium that lent itself to dashing off quick notes without giving them much thought, partly because of the ritual involved, and the time it took to write a note, find and address an envelope, add postage, and take the letter to a mailbox. Because the very act of writing a note or letter to someone took this many steps, and was spread out over time, we didn’t go to the trouble unless we had something important to say. Because of email’s immediacy, most of us give little thought to typing up any little thing that pops in our heads and hitting the send button. And email doesn’t cost anything.
Sure, there’s the money you paid for your computer and your internet connection, but there is no incremental cost to sending one more email. Compare this with paper letters. Each one incurred the price of the envelope and the postage stamp, and although this doesn’t represent a lot of money, these were in limited supply – if you ran out of them, you’d have to make a special trip to the stationery store and the post office to buy more, so you didn’t use them frivolously. The sheer ease of sending emails has led to a change in manners, a tendency to be less polite about what we ask of others. Many professionals tell a similar story. One said, “A large proportion of emails I receive are from people I barely know asking me to do something for them that is outside what would normally be considered the scope of my work or my relationship with them. Email somehow apparently makes it OK to ask for things they would never ask by phone, in person, or in snail mail.”
There are also important differences between snail mail and email on the receiving end. In the old days, the only mail we got came once a day, which effectively created a cordoned-off section of your day to collect it from the mailbox and sort it. Most importantly, because it took a few days to arrive, there was no expectation that you would act on it immediately. If you were engaged in another activity, you’d simply let the mail sit in the box outside or on your desk until you were ready to deal with it. Now email arrives continuously, and most emails demand some sort of action: Click on this link to see a video of a baby panda, or answer this query from a co-worker, or make plans for lunch with a friend, or delete this email as spam. All this activity gives us a sense that we’re getting things done – and in some cases we are. But we are sacrificing efficiency and deep concentration when we interrupt our priority activities with email.
Until recently, each of the many different modes of communication we used signalled its relevance, importance, and intent. If a loved one communicated with you via a poem or a song, even before the message was apparent, you had a reason to assume something about the nature of the content and its emotional value. If that same loved one communicated instead via a summons, delivered by an officer of the court, you would have expected a different message before even reading the document. Similarly, phone calls were typically used to transact different business from that of telegrams or business letters. The medium was a clue to the message. All of that has changed with email, and this is one of its overlooked disadvantages – because it is used for everything. In the old days, you might sort all of your postal mail into two piles, roughly corresponding to personal letters and bills. If you were a corporate manager with a busy schedule, you might similarly sort your telephone messages for callbacks. But emails are used for all of life’s messages. We compulsively check our email in part because we don’t know whether the next message will be for leisure/amusement, an overdue bill, a “to do”, a query… something you can do now, later, something life-changing, something irrelevant.
This uncertainty wreaks havoc with our rapid perceptual categorisation system, causes stress, and leads to decision overload. Every email requires a decision! Do I respond to it? If so, now or later? How important is it? What will be the social, economic, or job-related consequences if I don’t answer, or if I don’t answer right now?
Now of course email is approaching obsolescence as a communicative medium. Most people under the age of 30 think of email as an outdated mode of communication used only by “old people”. In its place they text, and some still post to Facebook. They attach documents, photos, videos, and links to their text messages and Facebook posts the way people over 30 do with email. Many people under 20 now see Facebook as a medium for the older generation.
For them, texting has become the primary mode of communication. It offers privacy that you don't get with phone calls, and immediacy you don't get with email. Crisis hotlines have begun accepting texts from at-risk youth, which gives counsellors two big advantages: they can deal with more than one person at a time, and they can pass the conversation on to an expert, if needed, without interrupting it.
But texting suffers from most of the problems of email and then some. Because it is limited in characters, it discourages thoughtful discussion or any level of detail. And the addictive problems are compounded by texting’s hyperimmediacy. Emails take some time to work their way through the internet and they require that you take the step of explicitly opening them. Text messages magically appear on the screen of your phone and demand immediate attention from you. Add to that the social expectation that an unanswered text feels insulting to the sender, and you’ve got a recipe for addiction: you receive a text, and that activates your novelty centres. You respond and feel rewarded for having completed a task (even though that task was entirely unknown to you 15 seconds earlier). Each of those delivers a shot of dopamine as your limbic system cries out “More! More! Give me more!”
In a famous experiment, my McGill colleagues Peter Milner and James Olds, both neuroscientists, placed a small electrode in the brains of rats, in a small structure of the limbic system called the nucleus accumbens. This structure regulates dopamine production and is the region that “lights up” when gamblers win a bet, drug addicts take cocaine, or people have orgasms – Olds and Milner called it the pleasure centre. A lever in the cage allowed the rats to send a small electrical signal directly to their nucleus accumbens. Do you think they liked it? Boy how they did! They liked it so much that they did nothing else. They forgot all about eating and sleeping. Long after they were hungry, they ignored tasty food if they had a chance to press that little chrome bar; they even ignored the opportunity for sex. The rats just pressed the lever over and over again, until they died of starvation and exhaustion. Does that remind you of anything? A 30-year-old man died in Guangzhou (China) after playing video games continuously for three days. Another man died in Daegu (Korea) after playing video games almost continuously for 50 hours, stopped only by his going into cardiac arrest.
Each time we dispatch an email in one way or another, we feel a sense of accomplishment, and our brain gets a dollop of reward hormones telling us we accomplished something. Each time we check a Twitter feed or Facebook update, we encounter something novel and feel more connected socially (in a kind of weird, impersonal cyber way) and get another dollop of reward hormones. But remember, it is the dumb, novelty-seeking portion of the brain driving the limbic system that induces this feeling of pleasure, not the planning, scheduling, higher-level thought centres in the prefrontal cortex. Make no mistake: email-, Facebook- and Twitter-checking constitute a neural addiction.
https://www.theguardian.com/science/201 ... n-overload
Nissan’s Brain-To-Vehicle Interface Could Make Driving Safer by Scanning Your Brain
The automaker announced at the beginning of January 2018 that it is developing a Brain-to-Vehicle (B2V) interface that, if implemented, would increase a driver's reaction times to make driving safer.
This collaboration between human driver and semi-autonomous vehicle would see the latter predicting the former's actions — be it turning the steering wheel or applying the brakes — by reading and interpreting their brain signals using electroencephalography (EEG) technology. Upon doing so, the semi-autonomous vehicle would start those actions 0.2 to 0.5 seconds sooner. The automaker calls it "Nissan Intelligent Mobility." When in autonomous mode, the system could also detect driver discomfort and adjust its driving style accordingly, or use augmented reality to alter what the driver sees.
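Nissan has not published its detection algorithm, but the core idea — spot a rising "readiness" signal in the EEG before the muscle movement happens — can be sketched with a toy threshold detector. Every number below (the signal values, threshold, window) is hypothetical:

```python
def detect_intent(samples, threshold=0.6, window=3):
    """Return the index at which `window` consecutive EEG samples exceed
    `threshold` — the moment a B2V-style system would begin assisting,
    ahead of the driver's actual physical action. Returns None if no
    sustained rise is found."""
    run = 0
    for i, s in enumerate(samples):
        run = run + 1 if s >= threshold else 0
        if run == window:
            return i - window + 1
    return None

# Simulated readiness potential ramping up before a braking event at t=8
eeg = [0.1, 0.15, 0.2, 0.3, 0.5, 0.65, 0.7, 0.8, 1.0]
print(detect_intent(eeg))  # 5 — a few samples before the action peak
```

A real system would of course use trained classifiers over multi-channel EEG, not a fixed threshold; this only illustrates where the claimed 0.2–0.5 second head start comes from.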
https://futurism.com/nissans-brain-vehi ... ur-brains/
GM Will Launch Robocars Without Steering Wheels Next Year
After more than a century making vehicles for humans to drive, General Motors has ripped the heart out of its latest ride, and is now holding the grisly spectacle up for all the world to see: A car with no steering wheel. And it plans to put a fleet of these newfangled things to work in a taxi-like service, somewhere in the US, next year.
And no, this robo-chariot, a modified all-electric Chevrolet Bolt, doesn't have pedals either. This is GM's truly driverless debut, a car that will have to handle the world on its own. No matter what happens, you, dear human passenger, cannot help it now.
Terrifying? Maybe. But it's also a major step in GM’s aggressive bid to maintain its big dog status as the auto industry evolves away from individual ownership and flesh-and-blood drivers. And it’s just the beginning for the Detroit stalwart. “We’ve put together four generations of autonomous vehicles over the course of 18 months,” says Dan Ammann, GM’s president. “You can safely assume that the fourth generation won’t be the last.”
https://www.wired.com/story/gm-cruise-s ... unch-2019/
2018: The Year Blockchain, AI and IoT Converge
But will it be centralized?
Decentralization, by its very nature, requires that more intelligence shifts to nodes instead of residing in one central server.
We will continue to see the development of semiconductors that are capable of advanced computing in smaller and smaller devices. As devices at the edge become smarter, the smart contracts enabled by blockchain platforms will work better with more advanced data analytics capabilities.
I see a mini-brain in each of our devices, ranging from simplistic ones to ones capable of processing larger datasets and making decisions based on that data.
The open availability of more data and smarter processing at the nodes will enable broader datasets available to more companies and people, instead of proprietary data ownership that currently exists within companies such as Facebook and Google. More importantly, that data will be diverse and representative of the world we live in, instead of being filtered by a few companies that reside in one geography.
While this may not all happen within the next year, we have started an inevitable march towards that future, one that will be even more transformative than the internet was.
https://www.coindesk.com/2018-year-bloc ... -converge/
Doctors will have to be more like actors as A.I. gains momentum:
Zocdoc CEO Oliver Kharraz fully expects machine learning to take over many clinical functions. For doctors, that means emotional intelligence is going to become increasingly important.
"Doctors in the future will come from the same pool as actors," said Kharraz, in an interview on Thursday at the J.P. Morgan Healthcare conference in San Francisco. Zocdoc's web-based software lets consumers book medical appointments online.
Kharraz, who is also a doctor by training, said there's going to be a shift in the type of personalities attracted to medicine as machines start doing things like diagnosing medical conditions by analyzing scans.
To be successful, doctors are going to need empathy and an ability to listen.
https://www.cnbc.com/2018/01/11/zocdoc- ... er-eq.html
Amazon has created a new computing platform that will future-proof your home
Amazon is in a better position than any other company to dominate ambient computing, the concept that everything in your life is computerized and intelligent.
But I think it's time to add one more category to the list: ambient computing, or the concept that there can be a layer of intelligence powering everything in your home from your lights to your thermostat. Many see this as a new phase of computing where our technology works for us automatically. We're in the early days of ambient computing, but there's already a clear front-runner powering its future: Amazon Alexa.
Read more: http://www.businessinsider.com/amazon-a ... ome-2018-1
Amazon Officially Eliminates Cashiers...
Do you own a grocery store? You better pay attention. Same for you, merchants. As consumers get more and more used to self-service automated stores you’re going to need to respond. But don’t fire your employees yet – people still enjoy engaging with humans, so maybe you can figure out a balance between technology and human interaction.
Read more: https://www.forbes.com/sites/quickerbet ... d1e13c64f2
Intel’s quantum computing efforts take a major step forward
All of these efforts then, are baby steps along the way to true quantum computing. Intel isn't the only one pursuing this goal -- IBM happens to have a giant 50-qubit quantum computer hanging around at CES -- and the competition among tech giants to own this next generation of computing can only make it come more quickly.
Read more: https://www.engadget.com/2018/01/08/int ... p-forward/
5G phones expected in 2019 thanks to Chinese, Qualcomm pact
The reality of a 5G phone is closer than you think.
Mobile chip giant Qualcomm on Thursday announced a partnership with several of the largest Chinese phone manufacturers, including Lenovo (Motorola's parent), Xiaomi, ZTE, Oppo (OnePlus' owner) and Vivo, to build 5G phones as early as 2019.
Under the "5G Pioneer" initiative, Qualcomm will help create a platform for the companies to build phones running on 5G, the next generation of wireless technology that promises more speed and better coverage. 5G, one of the hottest trends in tech, is considered the foundation for a number of growing segments such as self-driving cars and artificial intelligence.
Read more: https://www.cnet.com/news/5g-phones-com ... oppo-vivo/
Meet the newest recruit of Dubai’s police force: Dubai will now police streets with self-driving robo-cars (with facial-recognition tech)
Earlier in June, Dubai Police inducted its first robotic police officer into its ranks. This month, city officials announced that autonomous police cars will begin patrolling the streets of Dubai by the end of the year to help identify and track suspects. Named the O-R3, the patrol car can navigate on its own using machine-learning algorithms and comes with a built-in aerial drone and facial-recognition technology to follow targets (and surveil areas and people) off-road. While the O-R3 will patrol the city on its own and use biometric software to scan individuals it comes across, the driverless car will still be controlled by the police remotely from behind a computer dashboard.
https://www.gqindia.com/content/meet-ne ... ice-force/
Ford wants to patent a driverless police car that ambushes lawbreakers using artificial intelligence
Imagine a police car that issues tickets without even pulling you over.
What if the same car could use artificial intelligence to find good hiding spots to catch traffic violators and identify drivers by scanning license plates, tapping into surveillance cameras and wirelessly accessing government records?
What if a police officer tapping on your car window asking for your license and registration became a relic of transportation’s past?
The details may sound far-fetched, as if they belong in the science-fiction action flick “Demolition Man” or a new dystopian novel inspired by Aldous Huxley’s “Brave New World,” but these scenarios are grounded in a potential reality. They come from a patent developed by Ford and being reviewed by the U.S. government to create autonomous police cars. Ford’s patent application was published this month.
https://www.washingtonpost.com/news/inn ... 9ce1c51e58
5th Gen wireless communications
The big four national carriers are in a race to deploy next-generation 5G wireless networks, which should bring faster speeds, superior responsiveness and better coverage. 5G is seen as the foundational technology for areas like self-driving cars and streaming virtual reality, and it starts with these early deployments.
So far Verizon has mostly talked about its plans to roll out 5G as a broadband replacement, with only a limited mobile 5G service this year. Meanwhile, T-Mobile and Sprint, which have announced plans to merge, are setting things up now for a commercial launch early next year. AT&T has said it will launch 5G in a dozen cities this year.
But consumers won't see any real benefits of these deployments until 2019 when the first 5G-capable smartphones hit the market.
https://www.cnet.com/news/verizon-sees- ... 5g-launch/
WhatsApp leads mobs to murder
In India, false rumors about child kidnappers have gone viral on WhatsApp, prompting fearful mobs to kill two dozen innocent people since April.
The phenomenon is part of a trend of false information flooding social media in recent years, which has incited violence from Brazil to Sri Lanka.
The messages in India have preyed on a universal fear: harm coming to a child. Some of the false messages on WhatsApp described gangs of kidnappers on the prowl. Others included videos showing people driving up and snatching children.
https://www.wraltechwire.com/2018/07/21 ... -in-india/
US military agency invests $100m in genetic extinction technologies
The technology could be used to wipe out malaria-carrying mosquitoes and other pests, but UN experts say fears over possible military uses and unintended consequences strengthen the case for a ban.
Cutting-edge gene editing tools such as Crispr-Cas9 work by using a synthetic ribonucleic acid (RNA) to cut into DNA strands and then insert, alter or remove targeted traits. These might, for example, distort the sex-ratio of mosquitoes to effectively wipe out malarial populations.
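The population-suppression mechanism rests on biased inheritance: a gene drive allele is passed to far more than the Mendelian 50% of offspring, so even a tiny release can sweep a population. A toy deterministic model makes the speed of this visible; the parameters (95% inheritance bias, 1% starting frequency) are purely illustrative, not from any real field trial:

```python
def gene_drive_generations(start_freq=0.01, generations=12, inheritance=0.95):
    """Toy model of gene-drive spread: a carrier transmits the drive
    allele to `inheritance` of its offspring instead of the Mendelian
    0.5, giving logistic-style growth in allele frequency. Returns the
    frequency after each generation."""
    freqs = [start_freq]
    q = start_freq
    for _ in range(generations):
        # Excess transmission (2*inheritance - 1 above Mendelian parity)
        # converts a fraction of non-carrier inheritance into carriers,
        # assuming random mating in a well-mixed population.
        q = q + (1 - q) * q * (2 * inheritance - 1)
        freqs.append(min(q, 1.0))
    return freqs

freqs = gene_drive_generations()
print(round(freqs[-1], 3))  # near 1.0 after a dozen generations
```

From 1% to near-fixation in roughly a dozen generations is why both the malaria-eradication promise and the "eradicating entire populations" worry in the next paragraph are taken seriously.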
“The dual use nature of altering and eradicating entire populations is as much a threat to peace and food security as it is a threat to ecosystems,” he said. “Militarisation of gene drive funding may even contravene the Enmod convention against hostile uses of environmental modification technologies.”
https://www.theguardian.com/science/201 ... chnologies
DARPA’s latest endeavor is a tiny robotics challenge
With an invention history that can claim a pioneering role in the development of the internet, Siri, GPS and other world-changing inventions, the U.S. government agency known as DARPA (Defense Advanced Research Projects Agency) has always thought big. Until now, at least. With its new SHRIMP program, DARPA is suddenly thinking very small indeed — and that’s really exciting.
The SHRIMP program — short for SHort-Range Independent Microrobotic Platforms — is an effort to develop new insect-scale robots for operating in environments where much larger robots may be less effective. In the tradition of its DARPA Grand Challenges, the organization is seeking proposals for suitable robots, in this case ones that weigh less than a gram and are smaller than one cubic centimeter. The selected micro-bots will then compete against one another in a “series of Olympic-themed competitions,” including categories like rock piling, steeplechase, vertical ascent, shot put, weightlifting and more.
https://www.digitaltrends.com/cool-tech ... -olympics/
DARPA Is Funding Research Into AI That Can Explain What It’s “Thinking”
These third wave systems would “think” rather than just churn out answers based on whatever datasets they’re fed (and as we’ve seen in the past, these datasets can include the biases of their creators). Ultimately, it is the next step to creating AIs that can reason and engage in abstract thought, which could improve how both the military and everyone else makes use of AI.
Why Universities Need To Prepare Students For The New AI World
Artificial intelligence is increasingly embedded in our consumer and business lives, and it is poised to transform how societies function in the years to come. Yet universities are not adequately preparing students for this changing world. To close that gap, AI needs to be increasingly embedded into higher education.
For students, AI will inevitably impact their careers. Those interested in careers in AI could pursue a wide range of exciting new career possibilities focused on data science, machine learning or advanced statistics. And, even students not focused on AI would benefit from a sound education in artificial intelligence and familiarity with working with machines.
The AI era will inevitably create new job types, ranging from machine regulators to emotion engineers. To succeed, all students will need to understand, at least at a high level, how machines perform. In addition, they should better equip themselves to do what machines cannot do.
https://www.forbes.com/sites/stephanieg ... e00b8d6bc8
World-first quantum computer simulation of chemical bonds using trapped ions
Quantum chemistry expected to be one of the first applications of full-scale quantum computers
An international group of researchers has achieved the world's first multi-qubit demonstration of a quantum chemistry calculation performed on a system of trapped ions, one of the leading hardware platforms in the race to develop a universal quantum computer.
https://www.sciencedaily.com/releases/2 ... 110028.htm
MIT Uses Nanotech to Miniaturize Electronics Into Spray Form
The 'aerosolized electronics' are so small they can be sprayed through the air. MIT researchers say the tiny devices could be used in oil and gas pipelines or even in the human digestive system to detect problems.
https://www.pcmag.com/news/362652/mit-u ... spray-form
Researchers find quantum 'Maxwell's demon' may give up information to extract work
Murch's team wanted to know if it would be possible to use information to extract work in this way on a quantum scale, too, though not by sorting fast and slow molecules. If a particle was in an excited state, they could extract work by moving it to a ground state. (If it was in a ground state, they would do nothing and expend no work.)
But they wanted to know what would happen if the quantum particles were in an excited state and a ground state at the same time, analogous to being fast and slow at the same time. In quantum physics, this is known as a superposition.
"Can you get work from information about a superposition of energy states?" Murch asked. "That's what we wanted to find out."
There's a problem, though. On a quantum scale, getting information about particles can be a bit … tricky.
"Every time you measure the system, it changes that system," Murch said. And if they measured the particle to find out exactly what state it was in, it would revert to one of two states: excited, or ground.
This effect is called quantum backaction. To get around it, the researchers (playing the role of the demon) didn't take a long, hard look at their particle. Instead, they took what is called a "weak observation." It still influenced the state of the superposition, but not enough to collapse it all the way to an excited or a ground state; the particle remained in a superposition of energy states. The observation was enough, though, to let the researchers track, with fairly high accuracy, exactly which superposition the particle was in—and this is important, because the way work is extracted from the particle depends on which superposition state it is in.
To get information, even using the weak observation method, the researchers still had to take a peek at the particle, which meant they needed light. So they sent some photons in, and observed the photons that came back.
"But the demon misses some photons," Murch said. "It only gets about half. The other half are lost."
But—and this is the key—even though the researchers didn't see the other half of the photons, those photons still interacted with the system, which means they still had an effect on it. The researchers had no way of knowing what that effect was.
They took a weak measurement and got some information, but because of quantum backaction, they might end up knowing less than they did before the measurement. On the balance, that's negative information.
And that's weird.
"Do the rules of thermodynamics for a macroscopic, classical world still apply when we talk about quantum superposition?" Murch asked. "We found that yes, they hold, except there's this weird thing. The information can be negative."
"I think this research highlights how difficult it is to build a quantum computer," Murch said.
"For a normal computer, it just gets hot and we need to cool it. In the quantum computer you are always at risk of losing information."
Read more at: https://phys.org/news/2018-07-quantum-m ... n.html#jCp
Virtual Reality Has Reached A “Tipping Point.” It’s Officially Here to Stay.
Thanks to virtual reality, you can swim with the dolphins, play some tennis, or spend some alone time, all from the comfort of your own living room. But it’s not yet perfect — a horrible wave of nausea can hit anytime, right in the middle of these activities.
“VR isn’t where I want it to be, but this current generation of products — I think it’s proved that VR is real,” David Ewalt told Futurism. Ewalt is a writer and journalist who focuses on new technology. His book on virtual reality, “Defying Reality,” was published on July 17.
“It’s not hype anymore,” he added. “It needs to get much better, but I think we reached that tipping point where you can try the products we have now and say, ‘damn, that really works. VR is real.’”
Magic Leap Headset Test Drive: Off Your Phone and Into Your World
There he is, the size of a Candy Land piece, right on the ottoman in front of me: teeny, tiny LeBron James. He jets down the Golden State Warriors’ court—sitting flush on the chocolate leather—and dunks in a hoop the size of my wedding band.
No, I haven’t had a psychedelic sandwich for lunch. I’ve just been wearing what looks like a pair of oversize swim goggles, attached to a Discman thingy on my hip—the Magic Leap One Creator Edition.
These augmented-reality goggles put virtual objects in the real world, unlike virtual-reality goggles, which block it out. Think “Pokémon Go” but far more realistic and potentially useful.
If you haven’t been following Silicon Valley’s mounting interest in AR, it’s time. Microsoft’s HoloLens headset is starting to pick up steam in enterprise applications. Apple has big plans in the space. And Magic Leap, while a no-name to most, has received nutty amounts of cash and a lot of buzz in the tech community. Since 2011, the company has raised over $2.3 billion—including funds from Google and AT&T—on the promise of its mysterious “Lightfield” technology.
https://www.wsj.com/articles/magic-leap ... 1533730080
The “neuropolitics” consultants who hack voters’ brains
These experts say they can divine political preferences you can’t express from signals you don’t know you’re producing.
Maria Pocovi slides her laptop over to me with the webcam switched on. My face stares back at me, overlaid with a grid of white lines that map the contours of my expression. Next to it is a shaded window that tracks six “core emotions”: happiness, surprise, disgust, fear, anger, and sadness. Each time my expression shifts, a measurement bar next to each emotion fluctuates, as if my feelings were an audio signal. After a few seconds, a bold green word flashes in the window: ANXIETY. When I look back at Pocovi, I get the sense she knows exactly what I’m thinking with one glance.
Petite with a welcoming smile, Pocovi, the founder of Emotion Research Lab in Valencia, Spain, is a global entrepreneur par excellence. When she comes to Silicon Valley, she doesn’t even rent an office—she just grabs a table here at the Plug and Play coworking space in Sunnyvale, California. But the technology she’s showing me is at the forefront of a quiet political revolution. Campaigns around the world are employing Emotion Research Lab and other marketers versed in neuroscience to penetrate voters’ unspoken feelings.
This spring there was a widespread outcry when American Facebook users found out that information they had posted on the social network—including their likes, interests, and political preferences—had been mined by the voter-targeting firm Cambridge Analytica. While it’s not clear how effective they were, the company’s algorithms may have helped fuel Donald Trump’s come-from-behind victory in 2016.
But to ambitious data scientists like Pocovi, who has worked with major political parties in Latin America in recent elections, Cambridge Analytica, which shut down in May, was behind the curve. Where it gauged people’s receptiveness to campaign messages by analyzing data they typed into Facebook, today’s “neuropolitical” consultants say they can peg voters’ feelings by observing their spontaneous responses: an electrical impulse from a key brain region, a split-second grimace, or a moment’s hesitation as they ponder a question. The experts aim to divine voters’ intent from signals they’re not aware they’re producing. A candidate’s advisors can then attempt to use that biological data to influence voting decisions.
Political insiders say campaigns are buying into this prospect in increasing numbers, even if they’re reluctant to acknowledge it. “It’s rare that a campaign would admit to using neuromarketing techniques—though it’s quite likely the well-funded campaigns are,” says Roger Dooley, a consultant and author of Brainfluence: 100 Ways to Persuade and Convince Consumers with Neuromarketing. While it’s not certain the Trump or Clinton campaigns used neuromarketing in 2016, SCL—the parent firm of Cambridge Analytica, which worked for Trump—has reportedly used facial analysis to assess whether what voters said they felt about candidates was genuine.
But even if US campaigns won’t admit to using neuromarketing, “they should be interested in it, because politics is a blood sport,” says Dan Hill, an American expert in facial-expression coding who advised Mexican president Enrique Peña Nieto’s 2012 election campaign. Fred Davis, a Republican strategist whose clients have included George W. Bush, John McCain, and Elizabeth Dole, says that while uptake of these technologies is somewhat limited in the US, campaigns would use neuromarketing if they thought it would give them an edge. “There’s nothing more important to a politician than winning,” he says.
The trend raises a torrent of questions in the run-up to the 2018 midterms. How well can consultants like these use neurological data to target or sway voters? And if they are as good at it as they claim, can we trust that our political decisions are truly our own? Will democracy itself start to feel the squeeze?
Brain, eye, and face scans that tease out people’s true desires might seem dystopian. But they’re offshoots of a long-standing political tradition: hitting voters right in the feels. For more than a decade, campaigns have been scanning databases of consumer preferences—what music people listen to, what magazines they read—and, with the help of computer algorithms, using that information to target appeals to them. If an algorithm shows that middle-aged female SUV drivers are likely to vote Republican and care about education, chances are they’ll receive campaign messages crafted explicitly to push those buttons.
Biometric technologies raise the stakes further. Practitioners say they can tap into truths that voters are often unwilling or unable to express. Neuroconsultants love to cite psychologist Daniel Kahneman, winner of the Nobel Prize in economics, who distinguishes between “System 1” and “System 2” thinking. System 1 “operates automatically and quickly, with little or no effort and no sense of voluntary control,” he writes; System 2 involves conscious deliberation and takes longer.
“Before, everyone was focused on System 2,” explains Rafal Ohme, a Polish psychologist who says his firm, Neurohm, has advised political campaigns in Europe and the United States. For the past decade, Ohme has devoted most of his efforts to probing consumers’ and voters’ System 1 leanings, which he thinks are as important as what people say outright. It’s been great for his business, he says, because his clients are impressed enough with the results to keep coming back for more.
Many neuroconsulting pioneers built their strategy around so-called “neuro-focus groups.” In these studies, involving anywhere from a dozen to a hundred people, technicians fit people’s scalps with EEG electrodes and then show them video footage of a candidate or campaign ad. As subjects watch, scalp sensors pick up electrical impulses that reveal, second by second, which areas of the brain are activated.
“One of the things we can analyze is the attentional process,” says Mexico City neurophysiologist Jaime Romano Micha, whose former firm, Neuropolitka, was one of the top providers of brain-based services to political campaigns. Romano Micha would place electrodes on a subject’s scalp to detect activity in the reticular formation, a part of the brain stem that tracks how engaged someone is. So if subjects are watching a political ad and activity in their reticular formation spikes, say, 15 seconds in, it means the message has truly caught their attention at that point.
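The spike-detection idea Romano Micha describes can be sketched in a few lines: scan a viewer's engagement signal for moments that rise well above their own baseline. Everything here (the function name, the threshold, the toy signal) is an illustrative assumption, not part of any real EEG toolkit or the firm's actual method.

```python
# Hypothetical sketch: flag the moments in an ad where a viewer's
# engagement signal spikes well above their baseline level.

def attention_spikes(signal, sample_rate_hz, threshold=2.0):
    """Return the times (in seconds) where the signal exceeds
    `threshold` standard deviations above its own mean."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    std = var ** 0.5
    cutoff = mean + threshold * std
    return [i / sample_rate_hz for i, x in enumerate(signal)
            if x > cutoff]

# A flat signal with one burst of activity 15 seconds in,
# sampled once per second:
signal = [1.0] * 30
signal[15] = 5.0
print(attention_spikes(signal, sample_rate_hz=1))  # -> [15.0]
```

A real pipeline would band-pass filter the raw EEG first and work per channel, but the reported insight — "the message caught their attention 15 seconds in" — reduces to exactly this kind of baseline-relative thresholding.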
“There’s nothing more important to a politician than winning.”
Other brain areas provide important clues too, Romano Micha says. Electrical activity on the left side of the cerebral cortex suggests people are working hard to understand a political message; similar activity on the right side may reveal the precise moment the message’s meaning clicks into place. With these kinds of insights, campaigns can refine a message to maximize its oomph: placing the most gripping moment at the beginning, for instance, or cutting the parts that cause people’s attention to wander.
But while brain imaging remains part of the neuropolitical universe, most neuroconsultants say it’s hardly sufficient by itself. “EEG gives us very general information about the decision process,” Romano Micha says. “Some people are saying that through EEG we can go into the mind of people, and I think that’s not possible yet.” There are cheaper and more reliable tools, several consultants claim, for getting at a voter’s true feelings and desires.
EEG scans, in fact, are now just one in a smorgasbord of biometric techniques. Romano Micha also uses near-infrared eye trackers and electrodes around the orbital bone to track “saccades,” minuscule movements of the eye that indicate viewers’ attentional focus as they watch a campaign spot. Other electrodes supply a rough gauge of arousal by measuring electrical activity on the surface of a person’s skin.
Of course, you can’t stick electrodes on every person watching TV and browsing Facebook. But you don’t need to. The results from experiments on small neuro-focus groups can be used to influence voters who aren’t being sampled themselves. If, for example, biodata reveals that liberal women over 50 are fearful when they see an ad about illegal immigration, campaigns that want to stoke such fear can broadcast that same message to millions of people with similar demographic and social profiles.
Pocovi’s approach at Emotion Research Lab requires only a video player and a front-facing webcam. When volunteers enroll in her political focus groups online, she sends them videos of an ad spot or a candidate that they can watch on their laptop or phone. As they digest the content, she tracks their eye movements and subtle shifts in their facial expressions.
“We have developed algorithms to read the microexpressions in the face and translate in real time the emotions people are feeling,” Pocovi says. “Many times, people tell you, ‘I’m worried about the economy.’ But what are really the things that move you? In my experience, it’s not the biggest things. It’s the small things that are close to you.” Something as small as a candidate’s inappropriately furrowed brow, she says, can color our perception without our realizing it.
Pocovi says her facial analysis software can detect and measure “six universal emotions, 101 secondary emotions, and eight moods,” all of which interest campaigns anxious to learn how people are responding to a message or a candidate. She also offers a crowd-analytics service to track the emotional reactions of individual faces in a human sea, meaning that campaigns can take the temperature of a room as their candidate is speaking.
ERL’s software is built around the facial action coding system (FACS) developed by Paul Ekman, a famed American psychologist. Pocovi’s algorithm deconstructs each facial image from the webcam into more than 50 “action units,” movements of specific muscle groups. Distinct clusters of action units correspond to particular emotions: cheek and outer-lip muscles contracting at the same time reveal happiness, while lowered brows and raised upper eyelids betray anger. Pocovi trains her system to recognize each one by showing it many reference images from a large database of faces expressing that emotion.
Some critics of Ekman’s system, such as neuroscientist Lisa Feldman Barrett, have argued that facial expressions don’t necessarily correlate with emotional states. Still, a variety of studies have shown at least some correspondence. In a 2014 study at Ohio State University, cognitive scientists defined 21 “distinct emotions,” based on the consistent ways most of us move our facial muscles.
Pocovi says her surveys also operate as an image-refining tool for candidates themselves. She analyzes video of candidates to pinpoint precise moments when their expressions make voters feel confused, disgusted, or angry. Politicians can then use this information to rehearse a different emotional approach, which can itself be vetted using Pocovi’s survey platform until it produces the desired response in viewers. In one campaign Pocovi advised, a candidate was recording a TV ad spot with an uplifting, positive message, but it kept getting terrible reviews in test screenings. The spot’s poor performance was a mystery—until Pocovi’s analysis of the candidate’s face showed he was unwittingly conveying anger and disgust. Once he realized what was going on, he was able to tweak his presentation and get a better response from the public.
Several onetime devotees of brain-scan analysis are also pursuing simpler and cheaper techniques these days. Before the 2008 financial crisis, Ohme says, international clients were more willing to fly five guys from Poland out to perform on-site brain studies. After the recession, though, that business mostly dried up.
That prompted Ohme to develop a different strategy, one untethered to time, space, or EEG electrodes. His updated approach stems from that used in unconscious-bias studies by social psychologist Anthony Greenwald, who became a mentor when Ohme visited the US on a Fulbright scholarship. Ohme says his smartphone-based test—which he calls iCode—reveals covert political leanings that would never surface in traditional questionnaires or focus groups.
When Ohme asked test subjects whether Hillary Clinton shared their values, they often hesitated for an unusually long time.
Ohme’s survey takers begin by answering calibration questions to assess their baseline reaction time. A habitually slower person, for instance, might have a “unit time” lasting 585 milliseconds, while someone quicker might take 387 milliseconds. Then images of politicians are shown on the screen, each paired with a single attribute, such as “trustworthy,” “well-known,” or “shares my values.” Users tap “yes” or “no” to indicate whether they agree with each pairing. As the test proceeds, the app tracks not just how they answer but how quickly they touch the screen and what tapping rhythm they establish.
What’s interesting, Ohme says, isn’t how people respond to the questions per se, but how much they dither first. “When we measure the hesitation level, we can see that some answers are positive but with hesitation, and some are positive and instantaneous,” he says. “We measure how much you deviated [from baseline]. That deviation is key.”
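The baseline-relative measure Ohme describes can be written down in one line: compare each response time against the user's calibrated "unit time" and report the deviation. The function name and scoring are illustrative assumptions; iCode's actual algorithm is proprietary.

```python
# Hypothetical sketch of a hesitation score: deviation from the
# respondent's own calibrated baseline reaction time.

def hesitation(response_ms, baseline_ms):
    """Deviation from baseline, in units of the user's own
    baseline reaction time. 0.0 means an instantaneous answer;
    larger values mean more dithering."""
    return max(0.0, (response_ms - baseline_ms) / baseline_ms)

# A respondent with a 585 ms unit time answering "shares my values":
print(hesitation(1170, 585))  # -> 1.0 (took twice their baseline)
print(hesitation(585, 585))   # -> 0.0 (no hesitation)
```

Normalizing by each person's own baseline is what makes a habitually slow responder (585 ms unit time) comparable to a quick one (387 ms): only the relative dither counts, not the raw milliseconds.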
Ohme declines to discuss his current political clients in much detail, citing confidentiality agreements. But he volunteers that in an iCode survey of nearly 900 people, he predicted Hillary Clinton’s 2016 defeat before the election. Throughout the year, Clinton ran comfortably ahead of Trump in traditional polls. But when Ohme asked test subjects whether Clinton shared their values, they often hesitated for an unusually long time before responding that she did. Ohme knew a sense of shared values was a big factor motivating people to vote in 2016 (in previous elections “powerful” and “leader” were key), so the results of the test gave him serious doubts about a Clinton victory. He argues that if Clinton’s campaign had run one of his studies before the election, she would have understood the depth of her vulnerability and could have made course corrections.
Ohme claims to have helped other candidates in similar straits. One of his tests revealed that while a certain European client had a good-sized base of supporters, many weren’t motivated to get out and vote because they assumed their candidate would win. Armed with this knowledge, the campaign made a renewed push to get its loyal base to the polls. The client ended up winning in a squeaker.
But does measuring people’s spontaneous reactions to a TV ad or a stump speech tell you how they will ultimately vote? “On the applied side, it’s pretty unclear, the hype from the reality,” says Darren Schreiber, a professor of political science at the University of Exeter and author of Your Brain Is Built for Politics. “It’s easy to over-believe the ability of these tools.” So far, cognitive tests have had mixed results: contrasting studies have shown that implicit attitudes both do and don’t predict how people vote.
Still, Schreiber, who has conducted brain-scan tests of political attitudes, admits the technologies are worrisome. Democracy assumes the presence of rational actors, capable of digesting information from all quarters and coming to reasoned conclusions. If neuroconsultants are even half as good as they claim at probing people’s innermost thoughts and shifting their voting intentions, it calls that assumption into question.
“We are susceptible in multiple ways, and not aware of our susceptibility,” Schreiber says. “The fact that attitudes can be manipulated in ways we’re not aware of has a lot of implications for political discourse.” If campaigns are nudging voters toward their candidate without voters’ knowledge, political discussions that were once exchanges of reasoned views will become knee-jerk skirmishes veering ever further from the democratic ideal. “I don’t think it’s time to run in panic,” Schreiber says, “but I don’t think we can be sanguine about it.”
Ohme insists that voters can inoculate themselves against neuroconsultants’ tactics if they’re savvy enough. “I measure hesitation. I can change your mind only if you hesitate. If you are a firm believer, I cannot change anything,” he says. “If you’re scared to be manipulated, learn. The more you learn, the more firm and stable your attitudes are, and the more difficult it is for someone to convince you otherwise.”
That’s perfectly reasonable advice. But I wonder. After meeting Pocovi, I logged into Emotion Research Lab to let its software track my face while I watched a demo video. The video was of a laughing baby, and I felt the corners of my mouth quirking up. After, the computer asked me how I’d felt while watching. “Happy,” I clicked. I’m a mom, right? I love babies. Yet when my emotion analysis arrived, it showed almost no trace of happiness on my face.
Thinking about the results, I realized the emotion software was right. I hadn’t really been happy at all. I had taken the test late at night, and I had been exhausted. The computer had seen me in a way I wasn’t used to seeing myself. I thought of something Dan Hill, the former advisor to the Mexican president’s campaign, had told me. “The biggest lies in life,” he’d said, “are the ones we tell ourselves.”
https://www.technologyreview.com/s/6118 ... rs-brains/
The odd reality of life under China's all-seeing credit score system
It might sound like a futuristic dystopian nightmare but the system is already a reality. Social credit is preventing people from buying airline and train tickets, stopping social gatherings from happening, and blocking people from going on certain dating websites. Meanwhile, those viewed kindly are rewarded with discounted energy bills and similar perks.
China's social credit system was launched in 2014 and is supposed to be nationwide by 2020. As well as tracking and rating individuals, it also encompasses businesses and government officials. When it is complete, every Chinese citizen will have a searchable file of amalgamated data from public and private sources tracking their social credit. Currently, the system is still under development and authorities are trying to centralise local databases.
Read more: https://www.wired.co.uk/article/china-social-credit
Trump Says You Need an ID to Buy Groceries. Shoppers Say, ‘Huh?’
“You know,” Mr. Trump knowingly told the approving crowd, “if you go out and you want to buy groceries, you need a picture on a card. You need ID.”
It is not often that Mr. Trump, a mold-breaking billionaire, is seen as behaving presidentially, which he thinks would be too boring, anyway. But with this particular offhand and baldly inaccurate comment, the president landed himself in the company of other presidents and presidential hopefuls who have fumbled while trying to showcase their everyman appeal.
The obvious question immediately surfaced: Has Mr. Trump ever been in the checkout line at a grocery store?
https://www.nytimes.com/2018/08/01/us/p ... eries.html
China legalizes Xinjiang 're-education camps' after denying they exist
Authorities in China's far-western Xinjiang region appear to have officially legalized so-called re-education camps for people accused of religious extremism, a little more than a month after denying such centers exist.
The Xinjiang government on Tuesday revised a local law to encourage "vocational skill education training centers" to "carry out anti-extremist ideological education."
Human rights organizations have long alleged the Chinese government has been detaining hundreds of thousands of Uyghurs -- a Turkic-speaking, largely Muslim minority native to Xinjiang -- in such centers as part of an effort to enforce patriotism and loyalty to Beijing in the region.
In an August 29 report, the UN Committee on the Elimination of Racial Discrimination expressed alarm at reports of Uyghurs and other Muslims being held for long periods of time without charge or trial "under the pretext of countering terrorism and religious extremism."
US Vice President Mike Pence made a similar accusation in a speech last week at the Hudson Institute.
"Survivors of the camps have described their experiences as a deliberate attempt by Beijing to strangle Uyghur culture and stamp out the Muslim faith," Pence said.
The Chinese government has forcefully maintained the reports aren't true and there is "no arbitrary detention or lack of freedom of religion or belief."
"Xinjiang citizens including the Uyghurs enjoy equal freedoms and rights," Hu Lianhe, a spokesman for China's United Front Work Department, told the UN panel.
In the revised Xinjiang law, Article 33 stipulates that "institutions such as vocational skill education training centers should carry out trainings on the common national language, laws and regulations, and vocational skills, and carry out anti-extremist ideological education, and psychological and behavioral correction to promote thought transformation of trainees, and help them return to the society and family."
The updated law all but acknowledges the growing reports of mass detentions inside Xinjiang, where former detainees say they were forced to yell patriotic slogans, sing revolutionary songs and study Chinese President Xi Jinping's teachings.
Maya Wang, senior China researcher at Human Rights Watch, said Xinjiang's regional government did not have the authority under China's constitution to legalize the detentions.
"Without due process, Xinjiang's political education centers remain arbitrary and abusive, and no tweaks in national or regional rules can change that," Wang said.
In the past year, Beijing has radically attempted to tighten its hold over the remote region following a spate of violent attacks that the government blamed on Uyghur Muslim separatists trying to establish an independent state.
In a submission to the United Nations, the Germany-based World Uyghur Congress estimated at least 1 million Uyghurs were being held in political indoctrination camps as of July.
"Detentions are extra-legal, with no legal representation allowed throughout the process of arrest and incarceration," the submission said.
Tuesday's announcement came a day after local leaders in Urumqi, capital of Xinjiang, announced the beginning of an anti-halal campaign.
Under the new rules, all officials and police in the region must make a declaration that they are "loyal Communist Party members" and "don't have any religious belief." Their only permitted faith is "Marxism and Leninism," and they must agree to "fight against 'pan-halalization' thoroughly," the new oath said.
It isn't the first time China has cracked down on elements of the Muslim faith in Xinjiang. In 2017, authorities first banned a wide range of activities, including wearing face coverings and having a long beard.
https://www.cnn.com/2018/10/10/asia/xin ... index.html
Most White Americans’ DNA Can Be Identified Through Genealogy Databases
The genetic genealogy industry is booming. In recent years, more than 15 million people have offered up their DNA — a cheek swab, some saliva in a test-tube — to services such as 23andMe and Ancestry.com in pursuit of answers about their heritage. In exchange for a genetic fingerprint, individuals may find a birth parent, long-lost cousins, perhaps even a link to Oprah or Alexander the Great.
But as these registries of genetic identity grow, it’s becoming harder for individuals to retain any anonymity. Already, 60 percent of Americans of Northern European descent — the primary group using these sites — can be identified through such databases whether or not they’ve joined one themselves, according to a study published today in the journal Science.
Within two or three years, 90 percent of Americans of European descent will be identifiable from their DNA, researchers found. The science-fiction future, in which everyone is known whether or not they want to be, is nigh.
“It’s not the distant future, it’s the near future,” said Yaniv Erlich, the lead author of the study. Dr. Erlich, formerly a genetic-privacy researcher at Columbia University, is the chief science officer of MyHeritage, a genetic ancestry website.
Read more: https://www.nytimes.com/2018/10/11/scie ... study.html
New Zealand’s ‘digital strip searches’: Give border agents your passwords or risk a $5,000 fine
Travelers who refuse to surrender passwords, codes, encryption keys and other information enabling access to electronic devices could be fined up to $5,000 in New Zealand (about US$3,300), according to new customs rules that went into effect Monday.
Border agents were already able to seize digital equipment, but the Customs and Excise Act of 2018 newly specifies that access to personal technology must be handed over as well. The law provides, however, that officials need to have “reasonable cause to suspect wrongdoing” before conducting a digital search — cold comfort for civil liberties advocates, who have sounded an alarm about the measure.
Read more: https://www.washingtonpost.com/news/mor ... a2a17a1446
Do We Need To Teach Ethics And Empathy To Data Scientists?
The growing shift away from ethics and empathy in the creation of our digital future is both profoundly frightening for the Orwellian world it is ushering in and a sad commentary on the academic world that trains the data scientists and programmers who are shifting the online world away from privacy. How might the web change if we taught ethics and empathy as primary components of computer science curricula?
One of the most frightening aspects of the modern web is the speed at which it has struck down decades of legislation and professional norms regarding personal privacy and the ethics of turning ordinary citizens into laboratory rats to be experimented on against their wills. In the space of just two decades the online world has weaponized personalization and data brokering, stripped away the last vestiges of privacy, centralized control over the world’s information and communications channels, changed the public’s understanding of the right over their digital selves and profoundly reshaped how the scholarly world views research ethics, informed consent and the right to opt out of being turned into a digital guinea pig.
Read more: https://www.forbes.com/sites/kalevleeta ... 6add6b12ee
Scientists Fear DARPA's 'Insect Allies' Will Attack Global Food Supply with Viruses
The US has been investing in genetic technology to help save its crops, but scientists fear that same technology could be unleashed on our enemies.
In 2016, the US Department of Defense launched the “Insect Allies” program, a four-year, $45 million effort to prevent crop failure driven by climate change and pathogens. The program would use insects to deliver a genetically engineered virus that improves crop growth by altering which genes the plants express.
Read more: https://motherboard.vice.com/en_us/arti ... th-viruses
Essays reveal Stephen Hawking predicted race of 'superhumans'
Physicist said genetic editing may create species that could destroy rest of humanity
The late physicist and author Prof Stephen Hawking has caused controversy by suggesting a new race of superhumans could develop from wealthy people choosing to edit their and their children’s DNA.
Hawking, the author of A Brief History of Time, who died in March, made the predictions in a collection of articles and essays.
The scientist presented the possibility that genetic engineering could create a new species of superhuman that could destroy the rest of humanity. The essays, published in the Sunday Times, were written in preparation for a book that will be published on Tuesday.
“I am sure that during this century, people will discover how to modify both intelligence and instincts such as aggression,” he wrote.
“Laws will probably be passed against genetic engineering with humans. But some people won’t be able to resist the temptation to improve human characteristics, such as memory, resistance to disease and length of life.”
Read More: https://www.theguardian.com/science/201 ... ays-reveal
DARPA Is Working to Create Cyborgs and Meta-Humans – Even if They Won't Admit It
If you thought the announcement of the first air-to-air drone kill or the unveiling of Russia's new bipedal military mech was cause for some Fallout-style meditations on how war is changing, rest assured that DARPA is developing new technologies that rival anything military science fiction (or cyberpunk dystopias) could imagine. You might be thinking of the bipedal robot Atlas (who's been showing off his growing parkour skills), but the reality is that neurotechnology is going to be the next big thing.
According to a new article published by The Atlantic, DARPA's neurotechnology initiatives are ostensibly aimed at helping soldiers recover from debilitating injuries, such as paralysis or loss of limbs. A big PR boost came from a highly publicized video of Jan Scheuermann, the owner of a DARPA-developed cybernetic arm, feeding herself chocolate by controlling her arm with only her thoughts. As Justin Sanchez, Director of DARPA's Biological Technologies Office, put it: "The people that we are trying to help should never be imprisoned by their bodies. And today we can design technologies that can help liberate them from that."
But according to an interview with at least one former DARPA employee, the organization is looking to "free the mind from the limitations of even healthy bodies."
This idea permeates everything DARPA is doing, as exemplified in the writer's talk with Geoff Ling, who works in the Biological Technologies Office. "If a brain can control a robot that looks like a hand, why can't it control a robot that looks like a snake?" asked Ling. "Why can't that brain control a robot that looks like a big mass of Jell-O, able to get around corners and up and down and through things...in my world, with their brain now having a direct interface with that glob, that glob is the embodiment of them. So now they're basically the glob, and they can go do everything a glob can do."
Apart from Jell-O globs, DARPA has plans to start augmenting the human brain and nervous system so that it can learn faster and perform beyond the capabilities of vanilla humans, as well as create programs to re-write soldiers' brains or effectively transfer memories into their minds. The goal seems to be pushing the boundaries of the human body, and the definition of human.
https://www.outerplaces.com/science/ite ... technology
Guided by CRISPR, prenatal gene editing shows proof-of-concept in treating disease before birth
Date: October 8, 2018
Source: Children's Hospital of Philadelphia
Summary: For the first time, scientists have performed prenatal gene editing to prevent a lethal metabolic disorder in laboratory animals, offering the potential to treat human congenital diseases before birth. Researchers offer proof-of-concept for prenatal use of a sophisticated, low-toxicity tool that efficiently edits DNA building blocks in disease-causing genes.
https://www.sciencedaily.com/releases/2 ... 183402.htm
Human 2.0 Is Coming Faster Than You Think. Will You Evolve With The Times?
“Our technology, our machines, is part of our humanity,” author, computer scientist, and inventor Ray Kurzweil once said. “We created them to extend ourselves, and that is what is unique about human beings.” In the past few years, there has been considerable discussion around the idea we are slowly merging with our technology, that we are becoming transhuman, with updated abilities, including enhanced intelligence, strength, and awareness.
Read more: https://www.forbes.com/sites/cognitivew ... 3ebb7c4284
Researchers Just Turned On the World's Most Powerful Computer Designed to Mimic a Human Brain
Neuromorphic computing just got a big boost with a million-core supercomputer that took over a decade to build.
Using computers to mimic the brain, also known as neuromorphic computing, is a rapidly growing area of computer science research that focuses on developing system architectures and specialized computer chips that replicate the way the human brain processes information. Not only will this allow neuroscientists to create unprecedented models of the brain, but it will also allow roboticists to create robots that can navigate complex environments using computer vision.
https://motherboard.vice.com/en_us/arti ... uman-brain
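The basic unit such hardware simulates is the spiking neuron: instead of computing continuously, it accumulates input, leaks charge over time, and fires only when a threshold is crossed. A toy leaky integrate-and-fire model illustrates the idea (the threshold and leak values are arbitrary, and real neuromorphic chips implement this in silicon, not software):

```python
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron: the membrane potential
    decays by `leak` each timestep, accumulates the input current,
    and emits a spike (then resets to zero) on crossing `threshold`.
    Returns the timesteps at which spikes occurred."""
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = v * leak + current
        if v >= threshold:
            spikes.append(t)
            v = 0.0
    return spikes

print(simulate_lif([0.5, 0.5, 0.5, 0.0, 0.6, 0.6]))  # spikes at t=2 and t=5
```

Because such neurons are idle between spikes, a machine built from them can be event-driven and highly parallel, which is what makes million-core designs like the one described above practical.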
Nature pushed to the brink by 'runaway consumption'
From 1970 to 2014, populations of animals with a backbone—fish, birds, amphibians, reptiles and mammals—declined by an average of 60 percent as a result of human activity, according to WWF's "Living Planet" report, based on an ongoing survey of more than 4,000 species spread over 16,700 populations scattered across the globe.
Read more at: https://phys.org/news/2018-10-nature-br ... n.html#jCp
Scientists Warn That World’s Wilderness Areas Are Disappearing
“Wild areas provide a lot of life support systems for the planet,” said the author of a study that found 77 percent of earth’s land had been modified by humans.
Scientists are warning that if human beings continue to mine the world’s wildernesses for resources and convert them into cities and farms at the pace of the previous century, the planet’s few remaining wild places could disappear in decades.
Today, more than 77 percent of land on earth, excluding Antarctica, has been modified by human industry, according to a study published Wednesday in the journal Nature, up from just 15 percent a century ago.
The study, led by researchers from the University of Queensland in Australia and the Wildlife Conservation Society in New York, paints the first global picture of the threat to the world’s remaining wildernesses — and the image is bleak.
“We’re on a threshold where whole systems could collapse and the consequences of that would be catastrophic,” said James R. Allan, one of the study’s authors.
https://www.nytimes.com/2018/10/31/worl ... -gone.html