Thinking about… the changes in the computer enthusiast market
In @@, Visualizing the Future... on Fri, 02 Apr 2010 02:12:31 -0700 at 02:12 PDT
When I think about networking, it brings up ideas that correlate messaging technologies with person-to-person interaction: Messenger, Message, Trust, Time.
A long, long time ago (the '90s to the first half of the '00s), far, far away (the '80s), growing up as a geek meant ridicule and mockery, much as the films over the years have (mostly) shown. –SawyerIII @ Asstd.
I have worked with computers, electronics, and technology for the last twenty-five years (directly and/or indirectly). Additionally, for the last 10 years I have had contact with a large sample (20–200 people per day, on average) of the general population of the large metropolis of San Antonio, TX. For the last 5 years, the people I have had the most contact with have been those who consider themselves (whether they knew the definition or not) technology enthusiasts, gamers, users, novices, and haters.
Don’t shoot/kill the messenger!
“I resent the idea that people would blame the messenger for the message, rather than looking at the content of the message itself.” –Anita Hill
“Don’t blame the messenger because the message is unpleasant.” –Kenneth Starr
As time has passed, I have delved deeper into technology and people's interaction with it, and I have realized that people hold many distinct, time-sensitive points of knowledge that, taken together, make up their entire body of knowledge about any subject. The problem arises when they have no personal context with which to filter good from bad information at the time. I find (through observation) that people tend to rank information from good to bad based on the level of trust they have in the originator of the message. This has a historical basis, and I think most people would agree with it, though I also think we can agree that people have gotten better at separating the messenger from the originator of the message.
I heard it through the grapevine.
There are things we need to know about a message other than its contents; I classify these as referential, instantaneous, and long-form messages.
Hopefully, when you receive a message, you mentally split the event into two categories, contextual and non-contextual. As these messages are recalled by you, person A, you can then try to separate the messenger from the message and log them by time. Remember, it is very difficult to separate the messenger from the message on the messenger's side, but you can try to do better once you have the message.
I don’t trust it.
People have a very hard time acting as a messenger if they understand the message, because there is a natural inclination to interpret the message and add it to their own knowledge cloud (unless they have an eidetic memory). This makes it almost impossible to pass the message to someone else UNFILTERED. When you, person A, receive a message created by person B at T=0, most people (both A and B) will filter the message through their own individual knowledge cloud, which may change the truth of the message. This is similar to mistranslation between languages: the process of passing the message changes its meaning.
How old is the message?
Now that the mound of dead messengers is no longer blocking the entrance to our homes, what about the message itself? What information can we get from it that is independent of its contents? If there is two-way communication between people, then a lot of information can be properly filtered and refined for clarity, but with single messages the only metric you can use is time. Time is another area in which people fail to keep their knowledge clouds updated. A current example of this is the Vista OS: if you ask people whether Vista is good or bad, the answer comes directly from their knowledge cloud; if you then ask how they came to that conclusion, you get back either a personal or a referential response, both of which are time-dependent. If the time concentration of their Vista knowledge is focused around Vista's launch (2006–2008), the answer you usually get is that it's bad; if you ask someone whose time focus is on Vista SP2+ (2009+), the response is usually good; and from those whose time focus encompasses both periods, you will probably get "It was bad, but now it's good." This tells us that talking to someone who keeps their knowledge up to date is far more important than talking to someone with a large knowledge cloud. The best people to talk to about a subject are those who have a large, up-to-date knowledge cloud AND can intelligently filter it.
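The Vista example can be sketched as a toy model. This is a hypothetical illustration, not anything from the post itself: a "knowledge cloud" is just a list of time-stamped opinions, and the answer you get back depends entirely on which time window is recalled.

```python
from dataclasses import dataclass

# Hypothetical model: one time-stamped opinion in a person's knowledge cloud.
@dataclass
class Opinion:
    year: int
    verdict: str

def recall(cloud, since_year=None):
    """Return the verdicts whose timestamps fall inside the recalled window."""
    return [o.verdict for o in cloud if since_year is None or o.year >= since_year]

# The Vista case from above: launch-era knowledge vs. post-SP2 knowledge.
vista = [Opinion(2007, "bad"), Opinion(2009, "good")]

print(recall(vista, since_year=2009))  # only the post-SP2 window -> ['good']
print(recall(vista))                   # the whole time span -> ['bad', 'good']
```

Same cloud, different windows, different "truths" — which is the point: the age distribution of someone's knowledge matters as much as its size.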
What does this mean?
So, summarizing the above, we see that time is important to all of these categories. Getting the "truth" is not easy, but the most important thing we can do is remember that all knowledge in our knowledge clouds is time-sensitive.
What does this have to do with computer enthusiasts?
First, let's define some classifications (as of 1H10)…
I classified myself as a "gamer" until the mid '00s, because only the highest-end machines performed at the level I wanted. Nowadays I think the gamer group has "jumped the shark" or "gone off the rails." While I agree that ultimate gaming realism needs a certain quality of output, let's consider what we have as possible output and input mechanisms…
In the '90s it was all about RAM and storage, but capacity (not speed or efficiency) stopped being critical back in the first half of the '00s. Since the second half of the '00s, the capacity question has become a bit of a boondoggle; most people have more than they need. "But my computer is so slow…" Yes, but adding more RAM or a BIGGER (not faster) drive is probably not the best hardware investment, which means that capacity is not the issue for most people.
People perceive performance based on what they see (or hear, touch, …) on the screen (speakers, "Power Glove 2"?), so as long as the technology responds at least as fast as the sense can operate (~5–120 Hz frame rates; 20/20 visual acuity at ~6.25 Hz (0.16 s); force feedback at ~10 Hz (0.1 s); auditory frequency 20 Hz–20 kHz; etc.), then it is working; if slower, there is a point at which it becomes unpleasant for each type of output, which means…
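The "at least as fast as the sense" rule above can be expressed as a simple threshold check. A minimal sketch, using illustrative numbers taken from the ranges quoted in the paragraph (they are not authoritative perceptual-science figures):

```python
# Illustrative per-sense response-time thresholds, in seconds, drawn from
# the ranges above. Real perceptual limits vary by person and task.
PERCEPTION_THRESHOLDS = {
    "visual": 0.16,          # ~6.25 Hz visual-acuity figure
    "force_feedback": 0.10,  # ~10 Hz
}

def feels_responsive(sense, response_time_s):
    """A response 'works' if it lands at or under the sense's threshold."""
    return response_time_s <= PERCEPTION_THRESHOLDS[sense]

print(feels_responsive("visual", 0.05))          # faster than the eye needs
print(feels_responsive("visual", 0.50))          # noticeably slow
print(feels_responsive("force_feedback", 0.08))  # under the haptic threshold
```

The design point is that once every output lands under its sense's threshold, further raw speed is imperceptible — which is the argument for why capacity and brute specs stopped mattering.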
Since the second half of the '00s, the easiest way to improve performance, if you could afford it, was to iterate on the same things we have been doing for years…
The biggest difference now is that SSDs have become as price-competitive per gigabyte as HDDs were back in '04. The interesting thing is that HDDs took ~3 years to halve their $/GB, while SSDs have taken only ~1 year. This is the big lesson: as we remove more and more of the mechanical solutions, we approach a faster generational turnover, from 3 years down to 1.5 years. With this information, I suspect that the early majority will start accepting SSDs as the common primary drive with Intel's G3 release; woe to the OEMs (notebook, motherboard, and case) who don't design for at least two types of drives (either 2x 2.5" bays, or 1x 2.5" and 1x PCIe x1).
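The two halving rates imply a crossover point, which is easy to work out. A sketch under stated assumptions: the ~1-year SSD and ~3-year HDD $/GB halving times come from the paragraph above, while the $3/GB and $0.10/GB starting prices are purely illustrative guesses, not market data.

```python
import math

def price_per_gb(start_price, halving_years, years_elapsed):
    """Exponential decay: price halves every `halving_years`."""
    return start_price * 0.5 ** (years_elapsed / halving_years)

def years_until_parity(ssd_start, hdd_start, ssd_halving=1.0, hdd_halving=3.0):
    """Solve ssd_start * 2^(-t/ssd_h) == hdd_start * 2^(-t/hdd_h) for t."""
    return math.log2(ssd_start / hdd_start) / (1 / ssd_halving - 1 / hdd_halving)

# Hypothetical 2010-ish starting points: SSD at $3/GB, HDD at $0.10/GB.
t = years_until_parity(3.0, 0.10)
print(round(t, 1))  # → 7.4 years under these assumed numbers
```

Even with generous assumptions the gap closes within a decade, which is the substance of the "faster generational turnover" claim: the faster halving rate, not the current price, decides the outcome.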
So what should gamers be interested in now (after they get a good SSD)? Why, different monitors and interfaces — this is the path with growth potential. I can't wait to see commercial examples of the tech I saw at CEDIA '09, here in San Antonio, TX.
What is going to happen next?
Just like with every major change in a technology, the few people who keep up to date will shift their focus to the next major advancement. For those who have not kept up, I have a prophetic tidbit for you…
One of Intel's next chipset revisions (I think it will be the 6X series) will incorporate their newest tech, Light Peak. This tech, for those who don't know, is a low-cost (<$10), high-speed (10+ Gbps), long-reach (~100 m) optical cable that should replace all other copper cables. One of the most important parts of this design is that they are building bridging technology right into the hardware, so add-on devices that convert between Light Peak and USB, FireWire, SATA, iSCSI, HDMI, DisplayPort, Ethernet, etc. should be easy to design and manufacture while we transition to it as the standard optical connection of the future. Did I mention that Intel (part of the USB Implementers Forum) designed an optional optical connection, "optical future-proofing," as part of the draft USB 3.0 specification, which was later dropped (maybe) for these reasons…
“Still and all, why bother? Here’s my answer. Many people need desperately to receive this message: I feel and think much as you do, care about many of the things you care about, although most people do not care about them. You are not alone.” –Kurt Vonnegut