Vrain Matari
ZionTCD Legacy Rising
Posted - 2012.08.31 20:57:00
Dane Stark wrote: Ok, I have been a computer programmer for 20+ years professionally and 30+ years for fun. Additionally, I have a university minor in Artificial Intelligence (Comp-Sci major). I tell you this because it is why I feel I have some potential insight into this "issue."
If I were writing an engine for hit detection, I would use a feedback, 'neural net'-style learning algorithm that constantly makes adjustments (albeit ever so slight ones) to all the formulas from game to game in order to 'train' the engine toward a point of stability. At the end of that process, you should have a pretty well-defined set of rules (equations) running hit detection. The variables would include everything from hit box dimensions to movement vectors to weapon ranges, and possibly weather, gravity and lighting conditions; the list goes on if you think about it.
Anyway, with a feedback system like this, what you would experience from game to game (assuming the algorithm is making these minor self-adjustments to its equation parameters after each match) is something like this: sometimes the hit detection will seem to work awesomely, and sometimes it will seem to suck like a vacuum cleaner. And, of course, you will 'hit' (pun intended) everything in between. Even more fun: your version of "awesome" is someone else's "vacuum cleaner", because everyone plays differently. Just look at the posts complaining about sensitivity (which, BTW, could be undergoing this very same style of training). These are dynamic-systems equations that feed back into themselves, but how they affect each player's game play (including the ever-changing lag variable that would have to be accounted for, something like 'lag in milliseconds per millisecond' or a derivative of it) can be very different.
The point is this: if you are 'training' feedback systems like this, it takes A LOT...and I mean A LOT...of inputs from A LOT of different sources over a fairly long period of time (Beta) to get these formulas 'smoothed out' and stable. By stable I mean at a point where the equations (the variables and all the coefficients & powers modifying those variables) stay the 'fairest' for all types of players in all the various circumstances.
Let's also not forget that, if they are actually using a system like this, different models will run better in different "environments". Over long periods of time, that also 'keeps players honest'. In some environments you may be "hot sh*t" and in others you may suck like that "vacuum cleaner". By keeping different 'sets of equations' that have been trained over time for different planets, CCP can keep everybody 'on their toes'. This helps eliminate stagnation and the permanently uber, god-like players.
Except those, of course, who truly are worthy of 'Uber Player' status and can adapt to all planetary environments while remaining at the top of the leader boards. Hats off to those folks for sure!
Anyone have any thoughts regarding this? I wonder if this is anywhere in the ballpark of what CCP uses inside the guts of the game; if it isn't, I'd love to explain to them why it should be! :-)
If CCP is using something like this I'd be impressed as hell.
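Just to check I'm picturing your scheme right, here's a rough toy sketch in Python of that per-match nudge. Every parameter name and number in it is something I made up on the spot, and the feedback function is a stand-in for real match telemetry, so treat it as a cartoon of the idea rather than anything CCP-shaped:

import random

# Toy hit-detection parameters the engine would be 'training' per match.
# The names and starting values here are invented for illustration only.
params = {
    "hitbox_scale": 1.00,
    "lag_compensation_ms": 80.0,
    "aim_assist_strength": 0.30,
}

def match_feedback(p):
    """Stand-in for real match telemetry: returns an error score, e.g. how far
    observed hit rates drifted from target hit rates. Here it is just noise
    around an arbitrary, made-up optimum."""
    return (abs(p["hitbox_scale"] - 1.05)
            + abs(p["lag_compensation_ms"] - 100.0) / 100.0
            + abs(p["aim_assist_strength"] - 0.25)
            + random.gauss(0, 0.02))

LEARN_RATE = 0.02  # deliberately tiny: no single match moves anything much

def nudge(params):
    """After each match, try a small random perturbation of each parameter and
    keep it only if the feedback score improved (a crude hill-climb standing in
    for whatever update rule the real engine might use)."""
    baseline = match_feedback(params)
    for key in params:
        trial = dict(params)
        trial[key] *= 1 + random.uniform(-LEARN_RATE, LEARN_RATE)
        if match_feedback(trial) < baseline:
            params[key] = trial[key]

for match in range(10000):  # "A LOT of inputs over a fairly long period of time"
    nudge(params)

print(params)  # values should have drifted toward the made-up optimum

The important bit is the tiny learning rate: no single match moves the numbers much, which is exactly why it takes a mountain of matches over a long Beta for things to settle.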
I like your approach. I suppose if I were doing it from scratch I'd be tempted to go Bayesian with MAXENT on the priors, plus some Monte Carlo to be fairly certain the state space was reasonably well-sampled. It's an academic point, though, as our two approaches would prolly yield similar results in the end.
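For what it's worth, here's roughly what I mean, in the same toy Python terms: a flat (maximum-entropy) prior over the allowed range of a parameter, then Monte Carlo samples reweighted and resampled after every match so the state space stays reasonably well covered. Again, the likelihood and every number here are pure invention on my part, a particle-filter-flavoured sketch of the idea, not anything CCP has confirmed:

import random, math

# Toy Bayesian treatment of a single hit-detection parameter (say, hitbox scale).
N = 5000
particles = [random.uniform(0.5, 1.5) for _ in range(N)]  # MAXENT prior: uniform over the allowed range

def likelihood(value):
    """Stand-in for real match telemetry: how well this parameter value would
    explain the hits and misses observed in one match. Here it is a Gaussian
    around a hidden, made-up 'true' value of 1.05."""
    return math.exp(-((value - 1.05) ** 2) / (2 * 0.1 ** 2)) + 1e-12

def update(particles):
    """One Monte Carlo update per match: weight every sample by the likelihood,
    resample in proportion, then add a little jitter so the state space stays
    reasonably well-sampled instead of collapsing onto a few points."""
    weights = [likelihood(p) for p in particles]
    resampled = random.choices(particles, weights=weights, k=len(particles))
    return [p + random.gauss(0, 0.005) for p in resampled]

for match in range(200):  # each match is one more round of evidence
    particles = update(particles)

estimate = sum(particles) / len(particles)
print(round(estimate, 3))  # posterior mean settles near the made-up 1.05

Different machinery than your nudge-per-match loop, but like I said, feed either one enough matches and they ought to land in about the same place.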