Dovallis Martan JenusKoll
Osmon Surveillance Caldari State
786
Posted - 2014.05.10 12:40:00 -
[1]
Specialized consoles are far faster than "jack of all trades" computers. As processing speeds increase and specialized functions become more potent than generic ones, I would not be surprised to see consoles excel in almost every field.
Currently the only edge a PC has is its graphics card, which is built for images on a flat monitor... So what happens in the future when 3D systems start cropping up? Computer graphics processors WILL hit a divide, with one set being faster at flat-screen presentation and another being faster at dual-perspective presentation, etc. The same thing is happening with input methods. Right now the PC has the most precise input method because of the mouse, but consoles are starting to gain gesture recognition. Pointing at the screen may become faster for making adjustments in the long run than ol' clicky over there.
Those are two areas where development is diverging. Where will the PC stand at the end of this? One of the biggest selling points for computers, the thing that keeps desktop prices low, is business! If businesses keep flat screens as the "professional" look while home systems shift toward 3-D, you can bet the price of home computers may rise, while consoles stay at about the same ratio, because console makers custom-order their components anyway.
Even now you can begin to see the difference between specialized performance and "jack of all," if only at startup. For example, I have a game on a console and a game on the PC. I turn both systems on at the same time; a few seconds later I've got my game running on the console, but the PC is still loading all its background hurdles... Then, about a minute later, the game starts up.
http://youtu.be/dtXupQg77SU Dust to Dust, theme
Dovallis Martan JenusKoll
Osmon Surveillance Caldari State
789
Posted - 2014.05.10 15:41:00 -
[2]
Maitue Mae wrote:
What? Using the computer platform isn't using a jack of all trades. You will be able to customize EVERYTHING (that means Jack of All Trades btw) on your computer to whatever you want it to be. There is NO limitation. Well, there is. It's your intelligence.
Your argument is based so narrowly in the present that you are unable to use any foresight, aren't you?
It reminds me of an argument about airplanes and trains when airplanes first came out. People claimed that airplanes were inferior to trains and would obviously never become a mass-transit system, because they were weak, could only carry one person at a time, and were relatively slow. Trains at that time were far faster, could carry tons upon tons of cargo, and hundreds of passengers. As time passed, airplanes became more specialized and able to travel faster and farther. Now most people use airplanes or cars to travel long distances; trains are still used, but mainly for cargo where time is not an issue, or in places where airplanes just don't make sense, such as a subway system within a city.
Basically, I'm asking you to actually look at things logically. You can emulate anything you want on a computer with current graphics systems, etc., but if you try to do dual 3-D imaging, the system has trouble distributing the image and it comes out far choppier than a normal image (see the Oculus Rift). If you had a chip that could render an image and then output the composite angles separately, completely on its own, you would get excellent stereoscopic output, but that same system would have problems trying to output to a 2-D screen.
Hence, if you actually wanted the best of both worlds, you'd have to put TWO separate graphics cards in every computer. Then you'd have to make individual cards for each new function that comes out, once they become refined. Plugging all those different cards and chips into one computer just to "have everything optimum" would make it incredibly clunky; they would probably have to double, if not triple, the tower space just to fit everything in there.
Or they could take the route of slightly degraded performance in all functions by having the chips inside simply emulate those processes. The emulation would be slower and never optimal, and extra CPU time would be needed at all times to do what a native chip could do without any CPU involvement.
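To put rough numbers on that, here's a back-of-the-envelope sketch in Python. Every figure in it (the 60 FPS budget, the per-eye render cost, the warp cost, the 1.5x emulation penalty) is a made-up assumption for illustration, not a measurement from any real GPU; it only shows the shape of the argument: stereo roughly doubles the per-frame work of a flat render, and emulating a missing native function on top of that eats the rest of the frame budget.

# Toy frame-budget model; all numbers are illustrative assumptions, not benchmarks.
FRAME_BUDGET_MS = 16.7  # budget for a 60 FPS target

def flat_frame(scene_cost_ms):
    """One render pass straight to a 2-D monitor."""
    return scene_cost_ms

def stereo_frame(scene_cost_ms, warp_cost_ms=2.0):
    """Two passes (left and right eye) plus a lens-distortion pass for a headset."""
    return 2 * scene_cost_ms + warp_cost_ms

def emulated_stereo_frame(scene_cost_ms, emulation_overhead=1.5):
    """The same stereo work done without native hardware support, at a penalty."""
    return stereo_frame(scene_cost_ms) * emulation_overhead

scene = 7.0  # assumed ms per eye on an imaginary card
for name, cost in [("flat", flat_frame(scene)),
                   ("native stereo", stereo_frame(scene)),
                   ("emulated stereo", emulated_stereo_frame(scene))]:
    verdict = "fits 60 FPS" if cost <= FRAME_BUDGET_MS else "misses 60 FPS"
    print(f"{name}: {cost:.1f} ms -> {verdict}")

On those made-up numbers the flat and native-stereo paths squeeze into the 60 FPS budget, while the emulated path blows past it, which is the "choppy" outcome described above.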
http://youtu.be/dtXupQg77SU Dust to Dust, theme