November 29, 2009

The Secret of NIMH

...Is an amazing movie. Mrs. Brisby and her family are also the most adorable animated characters that exist.

I hate this stupid sore throat.

November 25, 2009

Posted While On A Bus

Accessing the internet while I'm riding the bus home has to be one of the most awesome things I've done in a while. A pity the UW security is too ridiculous for me to tunnel through with Hamachi on my router. But no worries, I'm getting the hell out of that shithole in less than 3 weeks, and then I won't have to worry about taking my clothes back and forth and back and forth... just my laptop.

Whispers are working, except the Router plugin for RakNet is apparently not actually supported... but it turns out that it's rather unnecessary except for insane peer-to-peer connections anyway, so I replaced it with an RPC that's working quite nicely. Now I am programming a PHP serverlist that will double as a facilitator for a NAT punchthrough technique, which should remove the we-have-to-use-Hamachi-to-connect issue. Then it's off to the most difficult networking task in the entire project - physics interpolation. My task will be both to get a physics object to update itself over the network in an efficient manner, and to write a networking interpolation hack in box2D to make it work in the first place, which of course must be optimized to ridiculousness. Luckily I think there's a way to pass a negative step into the interpolation function to get it to interpolate backwards, which would solve my security issue, although not the collision problem. I'd have to basically discard all collisions for the reversal. I'm really not sure how to get that to work, especially since I still need to figure out how and which collisions to disregard for the interpolation forward.
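The negative-step idea can be sketched with a toy integrator. This is purely illustrative - the real hack would live inside box2D's own solver, and `BodyState`/`step` are invented names - but it shows why a negative dt rewinds a body exactly, as long as collisions are ignored in both directions:

```cpp
#include <cassert>
#include <cmath>

// Toy stand-in for a physics body; the real thing would be a b2Body.
struct BodyState {
    float x, y;    // position
    float vx, vy;  // velocity
};

// Integrate by dt seconds under constant gravity, ignoring collisions.
// For dt < 0 the operations are applied in reverse order, so that
// step(b, dt) followed by step(b, -dt) is an exact no-op.
void step(BodyState& b, float dt, float gravity = -9.8f) {
    if (dt >= 0.0f) {
        b.vy += gravity * dt;  // forward: accelerate, then move
        b.x  += b.vx * dt;
        b.y  += b.vy * dt;
    } else {
        b.x  += b.vx * dt;     // backward: un-move, then un-accelerate
        b.y  += b.vy * dt;
        b.vy += gravity * dt;
    }
}
```

The collision problem is exactly what this glosses over: the moment the solver resolves a contact, the motion is no longer reversible this way.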

Once that works, the only remaining physics problems to solve are 1. how to stagger the update packets based on proximity to an active player and 2. how to interpolate complex animated objects. The latter will be done whenever I get around to having complex animated objects, but the former will have to be the result of ongoing optimization and fine-tuning.
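Problem 1 - staggering update packets by proximity - might look something like this sketch, where every constant is a made-up tuning value that would really come out of the ongoing fine-tuning:

```cpp
#include <algorithm>
#include <cassert>

// Scale an object's network update interval by its distance to the nearest
// active player. All constants here are invented tuning values.
int updateIntervalMs(float distToNearestPlayer) {
    const float nearRadius  = 200.0f;   // inside this: full update rate
    const float farRadius   = 2000.0f;  // beyond this: minimum update rate
    const int   minInterval = 50;       // 20 updates/sec when close
    const int   maxInterval = 1000;     // 1 update/sec when far away
    float t = (distToNearestPlayer - nearRadius) / (farRadius - nearRadius);
    t = std::clamp(t, 0.0f, 1.0f);      // linear falloff between the radii
    return static_cast<int>(minInterval + t * (maxInterval - minInterval));
}
```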

In other news, I have invented a method of document reconstruction that would allow art programs to recover all information from a drawing-in-progress even in extreme circumstances, such as power outages. Unfortunately, while this isn't that difficult to implement, it requires a subtle feature that has to be built in from the ground up, so I wasn't able to code in a proof of concept :C
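One general shape such a recovery scheme can take - not necessarily the method I mean - is an append-only operation journal: record every edit as it happens and flush it immediately, so a crash loses at most the stroke in flight, and replaying the log rebuilds the drawing. A toy sketch with invented names and an invented op format:

```cpp
#include <cassert>
#include <cstdio>
#include <fstream>
#include <string>
#include <vector>

// Append each editing operation to a log the moment it happens.
struct Journal {
    std::ofstream out;
    explicit Journal(const std::string& path) : out(path, std::ios::app) {}
    void record(const std::string& op) {
        out << op << '\n';
        out.flush();  // the crucial bit: the op reaches the OS before we go on
    }
};

// After a crash (or on normal load), replay the journal in order.
std::vector<std::string> replay(const std::string& path) {
    std::vector<std::string> ops;
    std::ifstream in(path);
    for (std::string line; std::getline(in, line); )
        ops.push_back(line);
    return ops;
}
```

The "from the ground up" catch is visible even here: every editing tool has to be written to funnel its changes through `record`, which isn't something you can bolt onto an existing program.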

November 18, 2009

Origin of Bunny

"Bunny" seems to originate from the Scottish "bun" as a pet word for a rabbit in 1690, although it had previously been used for squirrels in 1587[source]. "Bun" also meant "tail of a hare" in Scottish, but the word might have its roots in either the French "bon" or a Scandinavian origin.

In other news, FUCK RAKNET. Seriously, that stupid library blows up if you so much as create one object before another, even when those objects seem to have nothing to do with each other, except for a hidden low-level relationship you're never told about and which is helpfully ignored in all the tutorials. Even some of the sample code I copied turned out to be fatally flawed. Thankfully, I have now sent and successfully processed my first packet. All that means, however, is that I now get to implement 3 different plugins of varying complexity to do something you'd think would be simple - chatting. After that I have the joyous task of inventing my own interpolation algorithm for box2D, when I have had no experience with physics engines. Awesome.

Meanwhile, there have been no fewer than 5 sirens outside my window today, my roommate cannot stop playing his horribly broken electric guitar (which is not plugged into an amp, which means it sounds like someone's dragging a guitar pick across sandpaper; BROKEN sandpaper), the food still sucks, and I just so happen to get the absolute lowest priority for class registration - if I can't pull off the miracle of a schedule I've got laid out, I'll never get out of this mess. Even then, I am sorely disappointed in myself after my work ethic collapsed and I ended up sleeping for almost 12 hours yesterday (and managed to miss a philosophy homework assignment, but no one cares about that anyway). It's taken me 2 days to make 0.1% progress on my game. Fuck.

Sometimes I really wish I could just stop acting like a rational human being and scream everything I want to scream even if I know it's wrong and then bawl my eyes out for no good reason.

Also, I fixed the stupid comment box text. It's black now and much easier to read. Please comment :C

- Get cUser Replicas to work
- Get Remote procedure calls to work
- Use RPC to implement chat using the cUser as a base
- Get the Router to work
- Use the router for user-specific messages
- Get Brick replica to work
- Define 4 levels of physics serialization
- Finish writing box2D networking interpolation hack
- process packets and implement interpolation on a simple level
- Network physics
- throw bricks
- Implement destructibles
- Implement destructible serialization
- Implement a physics callback system

November 17, 2009


The philosophical definitions of people's various stances on the existence of God:

Atheist: There is no god.
Theist: There must be a god.
Agnostic: We cannot know if god exists.

Consequently, if you are an atheist, you must PROVE that god does not exist. The same goes for theists. This has the interesting implication of poking a giant hole in Pascal's Wager: if there isn't a god and you don't go to church, you cease to exist; on the other hand, if there is a god and you don't go to church, you get sent to hell; therefore it's better to believe in god "just in case." Philosophers don't give a crap about that - philosophers only care about which point of view has a set of logical reasons that can be proved.

Following this logic, I think a huge number of people are actually Agnostics that either choose to believe in Atheism or choose to believe in Theism for their own personal reasons, and yet their logical reasoning is exactly the same. What's the point of this?

We are no longer arguing about the existence of god. We are instead arguing about which point of view is more beneficial to humanity in general.

November 15, 2009

Bad Timing

So about 10 minutes ago I finally looked at today's Writer's Block and realized where all the random people I didn't know were coming from. Unfortunately, this coincided with my exuberant post about being the most downloaded artist on CTG Music, which is not exactly the journal entry I'd want the hundred or so people that clicked on the "submitted by" link to see.

Man that's an insane coincidence. And not one I'm particularly happy about either. I somehow doubt that many people on here are Techno fans, so don't bother listening to my music :P


Player Controlled Avatar Interpolation - The standard method of interpolation - given a packet from the server about object x's location, place that object back where it was at the point in time the packet was constructed, freeze the entire simulation, and interpolate that physics island forward to the present, disregarding any initial collisions - does not work for client-controlled objects. In my situation, almost the entire world is server-controlled anyway, so for the vast majority of things it's a matter of just sending the client updated physics pathways. This works for the creation of new objects too, since we can create the object where it was supposed to be 150 ms ago, then interpolate it forward to the present.
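Reduced to one dimension, the rewind-and-interpolate-forward scheme looks like this. `ObjState` and `applySnapshot` are invented stand-ins - there is no real physics island here, and collisions are simply skipped, as described:

```cpp
#include <cassert>
#include <cmath>

// 1-D stand-in for a server-controlled object's state.
struct ObjState {
    double x, vx;
};

// Place the object at the authoritative snapshot (which is latencySec old),
// then step it forward to the present in fixed substeps, disregarding
// collisions so the catch-up is cheap and deterministic.
ObjState applySnapshot(ObjState snap, double latencySec, double dt = 1.0 / 60.0) {
    int steps = static_cast<int>(std::round(latencySec / dt));
    for (int i = 0; i < steps; ++i)
        snap.x += snap.vx * dt;  // pure integration; collisions skipped
    return snap;
}
```

With a 150 ms old snapshot and 60 Hz substeps, that is 9 catch-up steps per packet per object.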

However, the stick in the spokes is the very fact that players can move. Because a player avatar's position is, by definition, at an undefined location at the time a packet is sent, we have one of two options:

1. Given the updated information on what the player was doing, rewind the player object backwards and re-simulate it.
2. Let the player give us their current position and rotation.

The former is computationally expensive, but the latter is prone to hacking. The best solution is to allow both. The default should be a massively optimized re-simulation, but people having a LAN party or hosting their own server probably won't have to worry about hackers in the first place, and can disable it to save CPU time. On the other hand, if it's a dedicated server, chances are it can afford to make those additional calculations anyway.
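The allow-both solution boils down to a server-side switch. Everything in this sketch is illustrative - the names are invented, and the re-simulation branch is a one-line stand-in for the real rewind-and-re-simulate machinery:

```cpp
#include <cassert>
#include <cmath>

struct PlayerUpdate {
    float reportedX;  // option 2: the position the client claims
    float inputAxis;  // option 1: the input we can re-simulate from
};

struct ServerConfig {
    bool trustClients;  // true for LAN parties, false for dedicated servers
};

float resolvePlayerX(const PlayerUpdate& u, const ServerConfig& cfg,
                     float lastKnownX, float speed, float dt) {
    if (cfg.trustClients)
        return u.reportedX;  // cheap, but prone to hacking
    // stand-in for the massively optimized rewind-and-re-simulate path
    return lastKnownX + u.inputAxis * speed * dt;
}
```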

November 14, 2009

Most Downloaded Artist on CTG Music

...Is me


November 12, 2009

What Games Require

You know something's going wrong when the RakNet examples for chat networking can't connect to each other over a LAN :\

//- Figure out weird-ass networking problem
- Get Remote procedure calls to work
- Build cPlayer struct for player info passing
- Use RPC to implement chat
- Go back and build a packet filtering system
- Define 4 levels of physics information
- Finish writing box2D networking interpolation hack
- process packets and implement interpolation on a simple level
- Network physics
- throw bricks
- Implement destructibles
- Implement a physics callback system
- Use this to implement impact damage based on relative physics formulas
- Sync destructibles using RPC calls
- Put in health bars, network names, and other information
- Sync all this, including rudimentary score information as held by the server
- Build a functional basic shape editor
- Implement protocol buffers
- allow testbed activation on editor using in-game logic
- Build weapon system core
- Implement inventory
- Implement basic grappling gun
- Give GUI basic functionality (Weapon ammo tracking + health, etc.)
- Build options window and ensure most graphics options are functional
- Sync spawned weapon objects
- Build property-based weapon creation system
- Build weapons editor
- Design and implement weapon-centric distribution system
- Design weapon deadliness algorithm
- Implement weapon hashing and self-correcting danger network handling
- Implement anti-cheating weapon designs (weapon combination blacklist too)
- Differentiate between weapon restricted servers and open weapon servers
- Add Lua scripting core
- Integrate into weapons
- Extend weapons editor
- Implement complex object handling system
- Extend physics synchronization to handle complex objects
- Implement 2D nearest neighbor algorithm
- Test interpolation for complex object special cases
- Design complex object animation and synchronization schemes
- Extend weapons to allow for complex objects
- Extend editor to account for complex objects in generic cases
- Extend editor to handle basic animations for complex objects in generic cases
- Implement FX system
- Extend animation editor to handle animations for FX special cases
- Build specialized physics model for client-side FX.
- Integrate FX system into weapon subsystems and physics response system on a generic basis
- Make explosions
- Design hovering situation special-case for physics response system
- Apply this to giant hovering bases
- Ensure large physics object special-case in physics response system is stable
- Adapt 2D nearest neighbor algorithm for 2D lights
- Ensure lights act appropriately in indoor environments
- Implement powerups (including special-case physics response)
- Extend inventory to handle items on an abstract interactive basis
- Extend GUI into final mockup
- Implement unique kill registers for physics callbacks as dependent on weapon type/class/ID, as well as for specific event IDs
- Implement adaptive animation overloading system for complex avatars
- Ensure proper death animation as well as weapon swapping
- Abstract out the entire avatar into a class-system that must adapt for different body shapes.
- Implement class-specific statistics
- Create generic statistic trackers
- Build an interaction response system
- Combine interaction system with complex objects to create a generic vehicle class
- Convert base into a vehicle
- Build vehicles
- Implement vehicle spawn system and vehicle generic handling
- Build adaptive GUI system
- Create specialized vehicle GUI modifications
- Implement Map handling system
- Build map object spawn factory
- Network dynamic map changes
- Integrate Lua core into map scripting
- Compile list of basic map triggers
- Migrate objects over to map object handling
- Allow for multiple situational physics layers on base
- Get that stupid elevator to work
- Implement aircraft as a vehicle subset (this requires a physics response special case)
- Create Resource System
- Modify all spawned upgrades, powerups, vehicles and weapons to have generated resource costs.
- Implement drops
- Implement team resource counter as well as individual resource sharing systems
- Sync these over the network and apply anti-cheating subsystems
- Implement generic multiplayer statistic tracking over the client/server model
- Create the Lobby
- Add rooms
- Build server tracking system using the superserver
- Implement anti-cheating core on superserver and its authorization channels
- Ensure there are sufficient game creation options
- Test initial join and in-game join combinations
- Implement multiplayer statistic tracking over the entire superserver model and website (concept of a 'confirmed kill')
- Website integration
- Implement Clans
- Implement Medals
- Implement Ranks
- Vent support
- Further website integration
- Finalize ambient music tracks
- Final design overview
- Final polish
- Push to upload
- Design final commercial trailer

November 10, 2009

Physics-oriented Network Interpolation

Syncing a game over a client-server connection is not an easy task. It's actually extraordinarily difficult, and success is almost completely reliant on the quality of interpolation. Due to the nature of interpolation, it gets exponentially more inaccurate the more time is spent doing it, so a game designer should want to minimize the amount needed. This is not easy either, but it usually involves using the server as a middleman to halve the interpolation time. The only issue is that the server's interpolation becomes reality: a client interpolation can be fairly inaccurate and simply corrected later on, but the server's world is authoritative.

This has two consequences. One is fairly obvious: the server's interpolation between the time the player hit the move button and their current location must be extremely accurate, while the client's interpolation can be fairly sloppy. If, however, the server is a dedicated server (or has a parallel physics world), then a small trick can be employed - the server's physics world need only be updated with every physics packet received, enabling an accurate physics simulation for a fraction of a second and eliminating a small amount of interpolation. An additional measure can be taken by utilizing a peer-to-peer connection between the clients and sending tiny packets of button notifications. If these happen to reach the client before the server packet does, they can improve perceived responsiveness by giving the client a heads-up on whether a player has fired something.
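The dedicated-server trick - stepping the physics world only when a physics packet arrives, by exactly the elapsed gap - reduces to this 1-D sketch with invented names:

```cpp
#include <cassert>
#include <cmath>

// Toy world that is stepped on demand rather than on a fixed timer.
struct World {
    double x = 0.0, vx = 0.0;
    double lastStepTime = 0.0;
    void stepTo(double now) {
        x += vx * (now - lastStepTime);  // advance by the exact elapsed gap
        lastStepTime = now;
    }
};

// Packet handler: bring the world up to the packet's arrival time, so the
// packet's contents are applied to an exactly-current simulation.
void onPhysicsPacket(World& w, double arrivalTime, double newVx) {
    w.stepTo(arrivalTime);
    w.vx = newVx;
}
```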

Another possible method of interpolation involves knowing where a shot should be and where it currently is in the view of a network player, and accelerating it until it reaches its intended destination. This is, however, problematic in multiplayer shooter games, because a shot quite often hits another player or some object within the timespan of the ping, causing massive confusion for the player who thinks his shot is at the bottom of the screen rather than the top. In cases where accuracy is crucial, it is often best to simply have shots appear "out of nowhere" in front of the player that shot them, but play the shooting animation at the same time. This visual feedback triggers an instinctual "oh, it's lag" response, instead of "where the hell did that shot come from."
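The appear-in-front approach boils down to spawning the remote shot where it would already be after the ping, rather than at the muzzle, while the firing animation plays as the lag cue. A hedged sketch with invented names:

```cpp
#include <cassert>
#include <cmath>

struct Shot { float x, y; };

// Spawn a remote player's shot ahead of the shooter by the distance it
// covered during the ping, instead of rewinding it to the muzzle.
Shot spawnRemoteShot(float shooterX, float shooterY,
                     float dirX, float dirY,  // unit aim direction
                     float speed, float pingSec) {
    float travelled = speed * pingSec;  // distance covered during the ping
    return Shot{ shooterX + dirX * travelled,
                 shooterY + dirY * travelled };
}
```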

All of these methods are applied in anticipation of a perfect interpolation function, which is of course impossible. Hence, while we can mitigate most of the problems that would exist even with perfect interpolation, it ultimately comes down to simply coding a better, faster interpolation function.

November 6, 2009

Nearest Neighbor Algorithms for 2D and 3D lighting

When doing lighting calculations for a graphics engine in either a 2D or a 3D context, one must find all the objects within a given radius of a light source, as those are the only ones you need to bother rendering things for. Done correctly, such an algorithm would allow for an arbitrarily large number of lights across an extremely large plane, with no real loss in performance, provided that no object was lit by more than, say, 20 sources. As my physics-oriented friend pointed out, AI researchers have been inventing ridiculously fast nearest neighbor algorithms for n-dimensional space for decades. The very thought of me even attempting my own little optimization of this is stupid when there are such astounding amounts of stupidly fast, free algorithms out there, written by mathematical geniuses, that can do it for me.

And yet, this is rarely, if ever, actually used in graphics. Graphics programmers seem to operate under the assumption that there are so few light sources (10 or fewer) in a given scene that there's no point in implementing a super-efficient light culling algorithm, since the gains could easily be exceeded by improving the speed of the lighting calculation itself. While this theory holds water in most modern lackluster games, it ignores the power of short-range lighting. Lots of very small lights can add a huge amount of atmosphere to a scene, and they don't need to be particularly computationally expensive as long as you cull the lighting radius to a set number of objects really, really fast. The lighting calculations themselves, since they are only done on 4-5 objects at once as opposed to 500, are not an issue at all. At least, they aren't as long as you're allowing for an arbitrary number n of light sources.
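As a sketch of the kind of structure those fast radius queries use: even a plain uniform grid (simpler than the kd-trees the AI literature favors) makes "all lights within r of this object" touch only nearby cells instead of every light in the scene. This is illustrative code, not from any particular library:

```cpp
#include <cassert>
#include <cmath>
#include <unordered_map>
#include <utility>
#include <vector>

// Minimal 2-D uniform-grid index over light positions.
struct LightGrid {
    float cell;
    std::unordered_map<long long, std::vector<int>> buckets;  // cell -> light ids
    std::vector<std::pair<float, float>> positions;

    explicit LightGrid(float cellSize) : cell(cellSize) {}

    long long key(int cx, int cy) const {
        return (static_cast<long long>(cx) << 32) ^ (cy & 0xffffffffLL);
    }
    void addLight(float x, float y) {
        int id = static_cast<int>(positions.size());
        positions.emplace_back(x, y);
        buckets[key((int)std::floor(x / cell), (int)std::floor(y / cell))].push_back(id);
    }
    // All lights within r of (x, y); only the cells the circle overlaps are
    // visited, so cost scales with local density, not total light count.
    std::vector<int> lightsNear(float x, float y, float r) const {
        std::vector<int> out;
        int cx0 = (int)std::floor((x - r) / cell), cx1 = (int)std::floor((x + r) / cell);
        int cy0 = (int)std::floor((y - r) / cell), cy1 = (int)std::floor((y + r) / cell);
        for (int cx = cx0; cx <= cx1; ++cx)
            for (int cy = cy0; cy <= cy1; ++cy) {
                auto it = buckets.find(key(cx, cy));
                if (it == buckets.end()) continue;
                for (int id : it->second) {
                    float dx = positions[id].first - x, dy = positions[id].second - y;
                    if (dx * dx + dy * dy <= r * r) out.push_back(id);
                }
            }
        return out;
    }
};
```

With short-range lights, each query returns the handful of sources actually touching an object, which is exactly what makes the per-object lighting cost independent of the total light count.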

Once again, a potentially helpful mathematical algorithm is completely ignored by the graphics community, despite its ease of use and obvious helpfulness.

(I probably would have written more on this topic except I just got an idea for a pixel shader xD)

November 3, 2009

Proving Strong AI

I stumped my philosophy professor today. We were discussing the Chinese Room Thought Experiment and how it proves that a Strong AI is impossible (a strong AI being a thinking machine). The experiment is based on the idea that someone is sitting in a room and Chinese symbols are handed to him. He uses a rulebook that tells him what characters to write in response to the symbols given to him, and he understands neither the input nor the output. Due to this, anyone standing outside the room would think that the person inside the room understands Chinese, even though he clearly does not.

There are two glaring issues with this argument. The first is very simple - Understanding cannot be defined by philosophers. We will ignore that for now.

The second is what I stumped my philosophy teacher with - we learn and use language with a giant dictionary of words and their meanings, along with a set of rules of grammar. How is this in any way different from a guy in a room with a giant rulebook? If this thought experiment is correct, all Searle has succeeded in doing is proving that humans understand nothing. Hence, the thought experiment doesn't prove anything at all, because humans obviously do understand things, or I wouldn't be objecting to us understanding things.

So what is incorrect about this thought experiment? This leads us back to the first problem - what is Understanding? If we cannot differentiate between using a giant book of rules and actually understanding something, then my flimsy little laptop can "understand" the English words I'm typing and correct them for me, which is not true.

Hence, we are inevitably led to the problem of understanding. What differentiates following a bunch of rules and understanding a concept? The answer is simple: experience. Our experiences allow us to attach significance to symbols that would otherwise be totally meaningless to us. Someone can tell you an elephant is huge, but if you've never seen anything larger than a 10-foot-tall tree, you won't understand what that means.

This means that the Chinese room experiment succeeds in proving something painfully obvious: No, the man in the room doesn't understand anything. Sadly, this conclusion has no significance whatsoever. In fact, if we modify the experiment to say that the man is using all his previous experience and knowledge to try and interpret the symbols, and succeeds at doing so, then by definition he will understand the concept and the outside observer will be correct in thinking that he does.

By defining understanding as experience of an abstract concept, we can therefore identify the crucial difference between faking that you understand something, and actually understanding something - having an experience of it.

Now we can construct an alternate version of the Chinese Room thought experiment, where we have a robot that can sense the world around it (sight, smell, touch, taste, hearing). This robot responds to stimuli based on a set of rules it's programmed to follow. To an outside observer, the robot would appear to act human and to understand concepts. There are 2 possibilities:

1. The robot understands nothing and is simply using a very advanced rulebook to tell it what to do.

2. The robot does, in fact, understand what is going on, and by extension is therefore a thinking, conscious being.

With our new definition of "understand," we are now able to differentiate between these two situations. If the robot, like today's robots, cannot store memories or experiences, then it is not a thinking, conscious being and cannot understand what it is doing.

If, however, the robot CAN store memories and experiences, and it is capable of assigning these memories and experiences to the abstract definitions in its rulebook, then it is capable of gaining an understanding of the world around it, and hence is a conscious, thinking being.

So what separates us from an extremely advanced robot?


In direct contradiction to Searle's argument, a Strong AI must be possible, because human beings, by definition, are Strong AIs. If Strong AIs are impossible, we are impossible. To prove this wrong, a philosopher would have to somehow differentiate between a robot understanding something and an organic being understanding something. If one cannot do that, then we come to the inevitable conclusion that science has been trying to tell us for decades: the human brain is a giant, organic computer.