Maitue Mae wrote:
What? Using the computer platform isn't settling for a jack of all trades. You can customize EVERYTHING (that's what "jack of all trades" means, btw) on your computer to be whatever you want it to be. There is NO limitation. Well, there is one: your intelligence.
Your argument is so firmly rooted in the present that you can't apply any foresight, can you?
Reminds me of an argument regarding airplanes and trains, back when airplanes first came out. People claimed that airplanes were inferior to trains and would obviously never become a mass transit system, because they were flimsy, could only hold one person at a time, and were relatively slow. Trains at that time could approach 100 MPH, carry tons upon tons of cargo, and hold hundreds of passengers.
As time passed, airplanes became more specialized and able to travel faster and farther. Now most people use airplanes or cars to travel long distances. Trains are still used, but mainly for cargo where time is not an issue, or in places where airplanes just don't make sense, such as subway systems within a city.
Basically, I'm asking you to actually look at things logically. You can emulate anything you want on a computer with current graphics systems, but if you try to do stereoscopic 3-D rendering, the system has trouble distributing the work, and the output comes out far choppier than a normal image (see the Oculus Rift). If you had a chip that could render a scene and output the two eye angles separately, completely self-contained, you would get excellent stereoscopic output, but that same system would struggle to output to a 2-D screen.
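To make the doubled workload concrete, here's a minimal sketch of what stereoscopic output actually asks of the hardware: every frame needs two view matrices, one per eye, so the whole scene gets drawn twice. The helper names here (look_at, stereo_views, IPD) are my own illustrations, not any real driver's API:

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a right-handed 4x4 view matrix (illustrative helper)."""
    f = target - eye
    f = f / np.linalg.norm(f)                        # forward axis
    s = np.cross(f, up); s = s / np.linalg.norm(s)   # right axis
    u = np.cross(s, f)                               # true up axis
    m = np.identity(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye                      # move world to eye space
    return m

IPD = 0.064  # interpupillary distance in meters (typical ~64 mm)

def stereo_views(center_eye, target):
    """Stereo means TWO view matrices: the whole scene gets
    rendered once per eye, roughly doubling per-frame work."""
    forward = target - center_eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, np.array([0.0, 1.0, 0.0]))
    right = right / np.linalg.norm(right)
    offset = right * (IPD / 2)
    return (look_at(center_eye - offset, target),   # left eye
            look_at(center_eye + offset, target))   # right eye

left_view, right_view = stereo_views(np.array([0.0, 1.7, 0.0]),
                                     np.array([0.0, 1.7, -5.0]))
```

A flat 2-D screen only ever needs one of those matrices, which is exactly why a pipeline tuned for one case is awkward at the other.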
Hence, if you actually wanted the best of both worlds, you'd need TWO separate graphics cards in every computer.
Then you'd have to make an individual card for each new function that comes along, once it becomes refined.
Plugging all those different cards and chips into one computer just to "have everything optimum" would make it incredibly clunky; manufacturers would probably have to double, if not triple, the tower space just to fit everything in.
Or they could take the route of slightly degraded performance across all functions by having general-purpose chips emulate the processes. The emulation would be slower and never optimal: extra CPU time would be needed at all times to do what a native chip could do without any CPU involvement.
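As a rough illustration of that overhead, here's a small timing sketch. It's only an analogy: the pure-Python loop stands in for "the CPU emulating the function in software," and the numpy call stands in for "a native chip doing it in one pass." The array size and the speedup you see will vary by machine:

```python
import time
import numpy as np

N = 2_000_000
samples = np.random.rand(N)

# "Emulated" path: the general-purpose CPU steps through
# every element itself, one instruction stream for everything.
start = time.perf_counter()
emulated = [x * 0.5 + 0.25 for x in samples]
emulated_time = time.perf_counter() - start

# "Native" path: the same transform handed off to optimized,
# special-purpose machinery (numpy's vectorized kernels here).
start = time.perf_counter()
native = samples * 0.5 + 0.25
native_time = time.perf_counter() - start

print(f"emulated: {emulated_time:.3f}s  native: {native_time:.3f}s  "
      f"ratio: {emulated_time / native_time:.0f}x")
```

On a typical machine the loop comes out dozens of times slower, which is the same shape of penalty you'd pay for emulating a specialized chip in general-purpose software.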