Tag Archives: technology

Tech trends for 2015

Interested in what’s next or simply trying to keep up with impressive-sounding jargon? I’ve compiled a few lists of tech trends that are expected to be hot this year. I agree with some of them, for better or worse, and others are giving me food for thought as I consider the human implications. Click the links for more details.

Webbmedia Group (the presentation below is worth watching at full screen, but a summary list of the key points by Amy Webb in the Harvard Business Review includes deep learning, smart virtual personal assistants, “It’s like Uber for ____”, oversight for algorithms, data privacy, and blockchain technology):


10 Strategic Technology Trends from Gartner:

  1. Computing everywhere
  2. The Internet of Things (IoT)
  3. 3D printing
  4. Advanced, pervasive, invisible analytics
  5. Context-rich systems
  6. Smart machines
  7. Cloud/client architecture
  8. Software-defined infrastructure and applications
  9. Web-scale IT
  10. Risk-based security and self-protection

Tech Trends for 2015 from frog design:

  • Move over “step counters”
  • Ambient intelligence knows what’s up
  • Nano particles diagnose from the inside out
  • The emergence of the casual programmer
  • Eat your technologies
  • The Internet of food goes online
  • Mobilizing the next 4 billion
  • Personal darknets in the spotlight
  • 4D printing assembles itself
  • Digital currency replaces legal tender
  • The rise of cognitive behavioral therapy
  • Textiles get techy
  • Adaptive education personalizes learning
  • Achievement unlocked: you’re hired!
  • Micro-farming networks go mainstream

The Tech That Will Dominate 2015, from Tim Bajarin at PC Magazine:

  • Apple enhances product resolution and invades enterprise market
  • Increased vigilance against security breaches
  • Tablets as personal TVs
  • Streaming media everywhere
  • Better battery life
  • New MacBook Air
  • Domestic robots
  • Low-end tablets replacing other gadgets
  • Apple Watch more successful than expected
  • Easier ways to design/create 3D products

Posted by on January 7, 2015 in Digital Devices, In the News, Research



Random roundup for Monday

I’m fighting a cold today. Blah! Dayquil isn’t helping with deep thoughts, so I’ll offer a trio of shallow ones:

  • Earlier this month, WIRED had an essay about wearables that could have come straight from my brain: Wearables Are Totally Failing the People Who Need Them Most. The author points out that developers are going after the fitness market or trendy applications rather than creating products for people with chronic illnesses who need tracking and reminder methods. I mentioned a couple ideas related to that here.
  • On my Christmas wish list is the book Hacker, Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous by Gabriella Coleman. Coleman is an anthropologist who began studying Anonymous in 2008. I’ve read reviews on Slate and The Nation and I’m curious about her methodology and her perspective. Whenever I browse through 4chan or read about actions of some Anons, I find my anthropologist cap slipping into place as I try to see both the details and the bigger picture. I’m hoping her book is well done, thoughtful, and balanced.
  • My husband spotted a Linden in the wild over the weekend. For people unacquainted with Second Life, this means that an employee of the company that runs SL was spotted in-world. Other people I know report seeing them more often, but neither of us has stumbled across a Linden online — outside of a meeting/conference or taking care of a service ticket for me — in over 8 years. It was strangely exciting. Photos were taken.

Posted by on November 17, 2014 in Side Topics



What would your life be like without digital technology?

A couple nights ago, The Colbert Report interview segment featured yet another self-important philosopher-type. When asked to critique modern culture in ten words or less, he replied, “Too much digital. Not enough critical thinking. More physical reality.”

Now, I agree with him about critical thinking, though that part seems to contradict the rest of his statement. My opinion is that people who are active in both the physical and digital worlds can consciously choose to experience the best of both, based on their preferences. There are outliers on both sides, and their choices are valid too, whether they choose full immersion in the physical or the digital realm (as much as that is possible), but those choices are in no way superior.

What experiences shape the opinions of digital naysayers? Did their teenagers text through an important family moment? Was AOL just a little too complicated? Do they see ads for laptops and tablets and think that they’re babble in a language they don’t understand? Was a moment of forest reverie interrupted by a hiker’s Maroon 5 ringtone?

If I imagine my life without the digital components, it is not a richer existence. My husband and I got to know each other online for years before we met in person, so I wouldn’t have my comfy home life. I wouldn’t have any of the relationships I’ve built in virtual worlds and games, whether fleeting or enduring. The most lucrative positions in my job history and my financial stability now are based in digital technology. Before that, I was managing office buildings, which can be a difficult, miserable job.

Without digital technology, my experience of the world would be limited to what others tell me. I could read my local newspaper and the selection of books at the library and local bookstores, chosen by others. I could watch local television news and the national news. How limiting that would be! Now I have the work of thinkers around the globe, ancient and modern, at my fingertips. I can see the news through a plethora of filters. Plus, the income from those tech jobs allowed me to travel widely and to go back to college to study anthropology.

Speaking of anthropology, digital technology helps me understand and relate to other people. I’m not inherently good at that and if I had to meet everyone face-to-face, I’d become a hermit. I’m not innately compassionate either, yet Facebook and email allow me to express concern and support without my reserve being misunderstood. Because I can have a lot of interactions online, I have social energy when I need it. That allows me to take yoga classes and make small talk with strangers when I walk my dog through the neighborhood. Those can be excruciating or impossible when I am socially exhausted. Technology is thereby a contributor to my physical well-being, too.

My experiences with the physical world are entwined with the digital. Once I finish writing this post, I will clean and process a fantastic seven-pound mushroom harvested from my yard: I know it is edible thanks to online mushroom guides, I watched a YouTube video of how to clean it, and I found a recipe for wild mushroom soup on a blog. Those digital elements don’t detract from gathering and eating the most local of food; they enable it. We’ve gone camping in tents, kayaking, and on long bike rides this summer, and all of those deeply physical experiences were connected in some way to the digital — making campground reservations online, finding where we took a wrong turn off the bike paths using Google Maps, checking the weather radar to see if those dark clouds heading toward us were full of rain and we should paddle our arms off to race to the beach.

Sure, some of us act like selfish narcissists and those traits can be more obvious when technology is involved. Some of us get dazzled or obsessed with something for a while, and aspects of digital tech appeal to our compulsive inclinations. But for many, digital technology is an integrated and balanced part of our lives. I might be an extreme example but I’m certainly not unique.


Posted by on October 9, 2014 in Culture, Offline impact



Next weekend: Int’l Disability Rights Affirmation Conference in SL


Everyone is welcome to attend this conference, sponsored and hosted by Virtual Ability next Friday and Saturday. Presenters — including anthropologist Tom Boellstorff — will speak about topics from web accessibility (interface design and rights issues) to assistive technology and inclusion. The full schedule can be found here. Times are posted in SLT, which is Pacific Daylight Time. I’m hoping to attend one or two sessions.

These are important topics for anyone working in web or technology design, far beyond SL. If you’re not a current Second Life resident and want to attend, it’s quite easy to create a new free account, choose one of the pre-made avatars, and attend. Be sure to get set up and download the viewer software well ahead of time. You can drop me a note if you need assistance.


Posted by on September 27, 2014 in Health - Mental & Physical



The robotic future: “Humans Need Not Apply”

The video below from CGP Grey is more pessimistic than my opinion and overstates some numbers, but it is a broad, persuasive look at the current and near future of robots in every workplace. It’s fifteen minutes long and I recommend watching the entire thing if you have time.


I’ve been doing a lot of thinking about the economic options for humans displaced by robots. There will be more jobs in programming and robot service, sure, but what will the rest of us do? Some people insist that this scenario makes the case for a guaranteed minimum income for everyone. Hmm. Perhaps. That would require a significant overhaul of our economic priorities and increased corporate taxation, which hasn’t been popular in my country. I’m not an economist and I prefer to avoid angry political arguments, so I’ll back away from that one and see what the experts come up with. However, I expect this to be an issue in my lifetime.


Posted by on August 15, 2014 in Our Robot Overlords, Video



In the golden days before 2025, when the robots took our jobs

As a percentage of posts, I write about robots a lot. I firmly believe that we’re near the tipping point where robots will become mainstream; not just in industry or action movies, but in our offices and homes and on our roads. Perhaps we’ve learned from previous technological revolutions or we’re more wary of economic impact now, but I’m glad to see that bigger brains than mine are thinking about the human side of this change.

Illustration by Andrew Rae; Photograph: Bettmann/Corbis; originally from BloombergBusinessweek



Pew Research (with partners) asked almost 1,900 experts whether AI and robotic technology will lead to a net increase or decrease in human jobs by 2025. This is a significant concern because, given our current problems of unemployment and income inequality — and, in my country at least, a strong and sometimes self-defeating culture of individualism — losing jobs across the spectrum, but especially in the middle class, could be devastating. They summarized their findings:

Key themes: reasons to be hopeful

  1. Advances in technology may displace certain types of work, but historically they have been a net creator of jobs.
  2. We will adapt to these changes by inventing entirely new types of work, and by taking advantage of uniquely human capabilities.
  3. Technology will free us from day-to-day drudgery, and allow us to define our relationship with “work” in a more positive and socially beneficial way.
  4. Ultimately, we as a society control our own destiny through the choices we make.

Key themes: reasons to be concerned

  1. Impacts from automation have thus far impacted mostly blue-collar employment; the coming wave of innovation threatens to upend white-collar work as well.
  2. Certain highly-skilled workers will succeed wildly in this new environment—but far more may be displaced into lower paying service industry jobs at best, or permanent unemployment at worst.
  3. Our educational system is not adequately preparing us for work of the future, and our political and economic institutions are poorly equipped to handle these hard choices.

Pew has a page with specific quotes pulled from experts addressing each of those points. Most of them would make fine essay prompts, and I agree with some among both the optimists and the pessimists. I think this change is going to seem very sudden and we won’t be prepared. Automation and technical development have been growing at an increasing rate for decades. For example, my car is ten years old. The experience of driving it is much closer to driving a 1970s sedan than driving my parents’ 2014 Toyota with Bluetooth, GPS, backup camera, parking assist, and more. Buyers expect these features without considering how recently they didn’t exist.

Some of the experts point to the loss of jobs that could occur in the professional sector as a differentiating factor of the robot/AI revolution; others point to the creation of new types of jobs to work with, create, program, and maintain the machines. I think we’ll go through a period of friction as automation removes some minor tasks but adds others, without actually replacing workers.

Because of my interest in this topic, I tend to ask people how technology is changing their workplace, and I’ve had many opportunities to talk about this with medical professionals. Right now, the sense I get is that they’re feeling a lot of strain. Things like laptops and tablets remove a transcription step and save record space, but I always hear complaints about network speed and screen response time. Some workers are sensitive to seeming inattentive as they type my responses into a device, offering lots of unnecessary apologies. A nurse giving me a pre-surgery stress test said that new electronic record-keeping requirements came with new technology but little training. Instead of saving time, she found she had to spend almost as long doing data entry as completing a test, but her workload hadn’t decreased. Some of the nurses covered for others who were less proficient with the machines, which meant that they were turning into computer workers instead of the hands-on helping professionals they had trained to be.

At today’s stage of automation, some simple tasks that used to be done by receptionists or file clerks have been eliminated, but others have been shifted onto healthcare workers with unrelated specialized training. Patients benefit from automated procedures and electronic records, but some low-level workers lose their jobs and the middle level is overburdened with tasks that take time away from what they are trained to do.

That said, I am not a “new Luddite”. I think there needs to be a lot of thoughtful dialogue about adjusting to the future that puts people first — not gee-whiz technology, not corporations, not governments. There will be jobs for curious, self-motivated people who are willing and able to adapt their existing skills or train in new ones, but as a society, we also have to consider those who can’t or won’t. In the meantime, it’s heartening to see projects that attempt to extend human capabilities using robotics, like these power-lifting exoskeleton prototypes for South Korean shipyard workers.


Posted by on August 7, 2014 in Our Robot Overlords



Does it matter if movies feature blatantly incorrect science?

The movie Lucy opened in the US last week and it has simultaneously thrilled and enraged the geek subculture of which I consider myself a member. If you somehow haven’t seen the trailer, here you go:

See the dilemma? On the one hand, we get a superhero-type action movie with a female protagonist, directed by Luc Besson. Win win win! (Though she’s still a sacrificial pawn in a male-dominated system, in a movie that is almost devoid of women, so maybe we’ll scratch off one of those wins.) On the other hand, the plot emphatically repeats the myth that only about 10% of the average human brain is in use. There is plenty of hand-wringing about the lack of general scientific literacy. So, does it matter when big films present scientific errors as reality?

Arguments that it doesn’t matter:
  • Only people who know the correct science will be annoyed by it and it won’t sway their understanding.
  • It’s fiction. We disregard pseudo-scientific plot devices all the time, from radioactive spider bites to zombifying viruses.
  • It’s harmless. If the misrepresented science was important to the audience members, they would know the truth. If they don’t, it doesn’t matter in their lives.

Arguments that it does matter:
  • It’s offensive to actively perpetuate a myth about how we understand ourselves or the world.
  • It’s poor craftsmanship. Is it that hard for a screenwriter to Google “percentage of human brain used”?
  • OMG so stupid!!1!! <insert appropriate Internet rage here>

In the particular case of Lucy, which I saw yesterday:

  • I think this movie got slammed in some reviews because it perpetuates a known scientific myth, rather than using fantasy to extend beyond our scientific knowledge in an impossible way. It’s something we know is wrong, not just something that is exceedingly unlikely. That’s distracting, mildly infuriating, and disappointing.
  • This isn’t an absurd-but-forgivable device like gamma radiation exposure, quickly submerged in action and plot. It’s given credibility by having an “expert” lecture about it in an academic hall, and after that it is, like an inverted countdown, a continuous presence throughout the film. Lucy isn’t the only film to use this myth, but it declares it so loudly and aggressively that it’s hard to ignore.
  • It was unnecessary. A better effect could have been achieved with a little technobabble and hand waving. Say that the drug is causing Lucy’s brain to build faster, more extensive connections and her neural network is developing superhuman sensitivities. Is that so hard? Let Morgan Freeman lecture about instances of neural deficit and excess based on medical truth rather than give a portrayal of one of the worst scientists ever. As we walked to the car, my husband made an excellent suggestion: simply cutting out everything Morgan Freeman says in the first half of the movie would improve it instantly. His monologues make a mockery of the scientific method and what we confidently understand right now.
  • Why does the rationale need to seem scientifically valid, anyway? I suspended disbelief just fine in The Fifth Element, The Professional, La Femme Nikita — Besson movies I really enjoy — because the ludicrous concepts weren’t shoved down my throat. He can do so much better. Parts of this movie were gorgeous and the action was good fun, but trying to anchor it in science was a mistake.


On a related note, let me give an example of pseudo-mathematics done well. The first season of the series Silicon Valley was built around the development of an amazing digital compression algorithm. The writers consulted with a Stanford professor and one of his graduate students, who came up with a convincing technical explanation of an impossible algorithm. It was good enough that even the techies in the audience could tilt their heads and say, “Hmm, that won’t actually work, but neat idea.” In the show, the main character’s revelation of how to structure this algorithm was inspired by a hilarious and remarkably accurate scene in which his team calculates the mean time to accomplish a particular absurd action. It’s much like an NSFW version of a What If? entry, but I swear I’ve been in many meetings that devolved this way. You can watch the scene I’m talking about on YouTube here, with a caution about language and whiteboard penis drawings.


Posted by on July 28, 2014 in Side Topics, Video


