Monkey MAC wrote:
I think instead of these kinds of things being customizable, we just have them react to the environment.
So say you're on a Thermonically Active Planet: there is soot and particulates everywhere and your DOF is limited to, say, 20m. Your suit's systems react and you get a pulsed grid overlay that outlines buildings/terrain and enemies that would normally be in your FOV.
Say you're on a Desolate Jilted Planet, so there is no star close enough to provide light and you are in pitch black. Your suit systems react by giving you IR night vision; however, this can be as much a blessing as it is a curse, since flare effects such as Mass Drivers, muzzle flashes and turret fire can potentially blind you.
Say you are on a Close Proximity Exo-planet, where the sun is so bright it will burn your eyes. Your suit reacts by turning down the brightness on your helmet feed; however, walking in and out of buildings requires time for the brightness to readjust.
Then add a few little touches to make it more immersive:
Ice on the edges of your visor
Overly defined pixels after a flux or blinding grenade
Cracks during armour damage
Having the HUD physically load in the first time you spawn
Having the HUD react to clone failure (some flashing red text center screen)
that kinda thing.
Excellent points and outstanding dialogue. Again, the decision on doing this is not mine to make, but the artist in me would find these very fun to figure out and work on. Of course, that in itself isn't reason enough to do them. Designers would have to carefully consider the balance and effects on a player's gameplay.
But possibly speaking, absolutely. I can think of a couple of ways to accomplish these 'view modes'. One would be to replace all objects in the scene with another material type--xray, infra, ultra, etc. But a shader alone could be hard to make accurate. For example, infravision should show heat, but we don't track this information on a per-pixel level. We *could* create a different texture map to represent the surface heat of objects, but that seems like it could be expensive. Also, we *could* tag vertex colors with a value to represent this, but it would be a fair workload to go back and tag everything. There may be an easier, but less accurate, way, which would be to just guess based on some material information, such as the specular power (which tells you how much light is bounced off rather than absorbed), the emissive value, and the light intensity affecting the object. This would probably get you something pretty accurate for things that don't generate their own heat and only absorb it from the environment. But yeah, having diagnosed this, I'd say it could turn into a fair amount of work, and that was just infra-vision.
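To make that last guess-from-material-info idea a bit more concrete, here's a minimal sketch of what I mean. The Material struct, field names, and the 256 specular-power scale are made up for illustration, not our actual material system: absorbed light warms a surface up, shinier surfaces absorb less, and emissive surfaces count as their own heat source.

    #include <algorithm>
    #include <cstdio>

    // Hypothetical material parameters -- illustrative only.
    struct Material {
        float specularPower;   // higher = shinier, bounces more light instead of absorbing it
        float emissive;        // 0..1, self-lit surfaces (engines, lights) read as "hot"
    };

    // Guess an apparent heat value in 0..1 from material info and incoming light.
    float estimateApparentHeat(const Material& m, float incidentLightIntensity)
    {
        // Map specular power to a rough reflectance term.
        float reflectance = std::clamp(m.specularPower / 256.0f, 0.0f, 1.0f);
        // Light that isn't bounced off is treated as absorbed heat.
        float absorbed = incidentLightIntensity * (1.0f - reflectance);
        // Emissive surfaces override absorbed heat.
        return std::clamp(std::max(absorbed, m.emissive), 0.0f, 1.0f);
    }

    int main()
    {
        Material hull{64.0f, 0.0f};    // dull, unlit surface
        Material engine{16.0f, 0.9f};  // self-lit engine glow
        std::printf("hull: %.2f engine: %.2f\n",
                    estimateApparentHeat(hull, 0.5f),
                    estimateApparentHeat(engine, 0.5f));
    }

A shader version would do the same blend per pixel and feed the result into a heat-color ramp, which is where the "pretty accurate for things that don't generate their own heat" caveat comes from.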
Cracks and pixel overlays could happen, and we can be flexible about how they happen. If we want, we could put a 'visor' model in the scene and put the cracks on it, so we can control the depth that they happen at. Maybe some HUD elements are holographic projections and others are part of a screen--this would give us that control. We can also separate the effect based on armor or shield damage. The challenge will be balancing the variation in effects and dealing with overlapping effects. I imagine there will be some limitation that requires us to group effects and only display one from each group at a time.
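For what it's worth, a minimal sketch of that grouping idea; the group names and class are hypothetical, just to show the one-effect-per-group rule rather than how we'd actually wire it up:

    #include <map>
    #include <optional>
    #include <string>

    // Hypothetical grouping of visor/fullscreen effects -- illustrative only.
    enum class EffectGroup { VisorDamage, VisionMode, ScreenDistortion };

    class VisorEffectManager {
    public:
        // Activating an effect replaces whatever was active in its group,
        // so overlapping effects within a group never stack.
        void activate(EffectGroup group, const std::string& effectName) {
            active_[group] = effectName;
        }

        void clear(EffectGroup group) { active_.erase(group); }

        std::optional<std::string> activeEffect(EffectGroup group) const {
            auto it = active_.find(group);
            if (it == active_.end()) return std::nullopt;
            return it->second;
        }

    private:
        std::map<EffectGroup, std::string> active_;
    };

    int main()
    {
        VisorEffectManager fx;
        fx.activate(EffectGroup::ScreenDistortion, "pixelate");   // flux grenade hit
        fx.activate(EffectGroup::VisorDamage, "crack_light");     // armour damage
        fx.activate(EffectGroup::VisorDamage, "crack_heavy");     // replaces crack_light
        // Only "pixelate" and "crack_heavy" are displayed now, one per group.
    }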