Originally Posted by torilose View Post
Not true. There are people developing self-learning AIs on their own. There is a guy on YouTube, for example, who developed one for GTA 5, which is a very complex game. The AI learned to drive.

Sorry to disappoint you, but an AI that can drive in GTA 5 is nothing compared to an AI that can defeat an average player in Toribash. To start with, driving a car doesn't involve any forces beyond the AI's capabilities. All it needs to recognise is the road, other cars, and their movements (which are fairly straightforward and predictable).

As SkulFuk already mentioned, even a file that barely made a Tori stand on its own took a huge amount of space. Now if you look at an AI that can fully fight, you're looking at hundreds or thousands of gigabytes of data and a supercomputer to manage all of it.
Originally Posted by SkulFuk View Post
Vox had a WIP script that sorta did it years ago (it controlled both toris), but it took days of leaving the game running before it would figure out staying upright, let alone manage to fight properly.

Ideally you could run the program simultaneously on every system that uses it. Instead of holding the information on each individual computer running the program, it would send the information off somewhere central to be collected, compared, and condensed. And since, as you said, it was from 2011/2012, a lot will have changed that could improve the script in some shape or form, or it could even be made into a separate executable, removing any limitations of Lua.

There are hundreds of viable ways of creating a machine learning algorithm, each with benefits and drawbacks for its respective task. Finding the optimal algorithm for creating this AI would also reduce its learning time, at the cost of the time spent finding that algorithm.
Originally Posted by Smaguris View Post
Sorry to disappoint you, but an AI that can drive in GTA 5 is nothing compared to an AI that can defeat an average player in Toribash. To start with, driving a car doesn't involve any forces beyond the AI's capabilities. All it needs to recognise is the road, other cars, and their movements (which are fairly straightforward and predictable).

As SkulFuk already mentioned, even a file that barely made a Tori stand on its own took a huge amount of space. Now if you look at an AI that can fully fight, you're looking at hundreds or thousands of gigabytes of data and a supercomputer to manage all of it.

Do some research before you spit in my face like this. I actually have some coding experience and know my shit. "All it needs to recognise is the road, other cars, and their movements (which are fairly straightforward and predictable)." In Toribash you have a couple of joints. How is that harder? The AI will try different random moves at first and, based on success rate, improve certain openers. It doesn't take a lot of space to store that, as a joint's state can be described with just a few characters. After a couple of hundred generations the AI will be able to beat a lot of players without even having to react to anything they do. Then a couple of thousand generations later the AI will take enemy movement into account and react.
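Roughly, the generational loop I mean would look something like this (a minimal sketch: the joint count, population size, and the play_match fitness hook are placeholders, not anything Toribash actually exposes):

    import random

    JOINT_COUNT = 20      # placeholder; the thread argues over the exact count
    STATES = 4            # hold, relax, and the two motion directions
    TURNS = 10            # how many turns the opener covers
    POP_SIZE = 50

    def random_opener():
        # an opener is just one state per joint per turn
        return [[random.randrange(STATES) for _ in range(JOINT_COUNT)]
                for _ in range(TURNS)]

    def play_match(opener):
        # hypothetical hook: play the opener in-game against some opponent
        # and return a score; nothing like this exists out of the box
        raise NotImplementedError

    def evolve(generations=300):
        population = [random_opener() for _ in range(POP_SIZE)]
        for _ in range(generations):
            scored = sorted(population, key=play_match, reverse=True)
            parents = scored[:POP_SIZE // 5]                  # keep the best 20%
            children = []
            while len(children) < POP_SIZE - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, TURNS)
                child = [turn[:] for turn in a[:cut] + b[cut:]]   # one-point crossover
                if random.random() < 0.1:                     # occasional mutation
                    t = random.randrange(TURNS)
                    child[t][random.randrange(JOINT_COUNT)] = random.randrange(STATES)
                children.append(child)
            population = parents + children
        return max(population, key=play_match)

The whole argument here is really about how many play_match calls that loop needs before it produces something that wins.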
Originally Posted by torilose View Post
Do some research before you spit in my face like this. I actually have some coding experience and know my shit. "All it needs to recognise is the road, other cars, and their movements (which are fairly straightforward and predictable)." In Toribash you have a couple of joints. How is that harder? The AI will try different random moves at first and, based on success rate, improve certain openers. It doesn't take a lot of space to store that, as a joint's state can be described with just a few characters. After a couple of hundred generations the AI will be able to beat a lot of players without even having to react to anything they do. Then a couple of thousand generations later the AI will take enemy movement into account and react.

An AI that can make a car drive in GTA V and an AI that can competently play Toribash are not really comparable. GTA V driving AI is, in comparison, easy. So much so, in fact, that there is literally AI in the source code that drives cars, i.e. the NPC traffic you encounter in game. I appreciate that the base game AI is not self-learning, but nor does it need to be, hence I find the comparison unhelpful. A self-learning driving AI is something of a Rube Goldberg machine.

Chess is perhaps more comparable, at least in terms of the number of possible moves, but to beat Garry Kasparov a literal supercomputer was required. Hence it is not PRACTICALLY possible. Theoretically, certainly. This doesn't even account for the fact, as mentioned in a previous post, that Toribash is not a perfect information game like chess: both players commit their moves simultaneously, so the AI has to make predictions.

Originally Posted by torilose View Post
Do some research before you spit in my face like this. I actually have some coding experience and know my shit. "All it needs to recognise is the road, other cars, and their movements (which are fairly straightforward and predictable)." In Toribash you have a couple of joints. How is that harder? The AI will try different random moves at first and, based on success rate, improve certain openers. It doesn't take a lot of space to store that, as a joint's state can be described with just a few characters.

Who's spitting what lol, this is a discussion my man, calm your balls.

I have 4 years of coding experience, and around 2 of those are in game development, which yes, involves some AI work. The thing you don't understand is how simple and primitive a self-driving AI is compared to a TB fighting one.

To drive a car you need W, A, S, D: 4 controls. It involves no prediction, just recognising the terrain and using those 4 controls.

In Toribash you actually have 18 joints, not a couple. Each joint has 4 states, so to an AI that's 72 controls. I think there is a considerable difference between 4 and 72. And I'm not even including the prediction element of the game, which is harder than any number of controls.

Storing joint status doesn't seem like much if you look at it as "a few characters", but if you really have any programming experience you would know that it's not that simple. As we already established, there are 18 joints with 4 states each. Now the AI has to store the state of those 18 joints EVERY FRAME for thousands and thousands of games. How much do you think that can take up? The answer is a lot.

Originally Posted by torilose View Post
After a couple of hundred generations the AI will be able to beat a lot of players without even having to react to anything they do. Then a couple of thousand generations later the AI will take enemy movement into account and react.

Idk what you're on about, but it doesn't belong in this discussion lol. We're talking about the present, not thousands of years from now. And even then you're wrong: it won't take hundreds of generations; it could easily be accomplished on a home computer within 20 years at the rate technology is evolving.
Generations in the sense of genetic algorithms. IT'S NOT LIKE HUMAN GENERATIONS. For god's sake, google "genetic algorithm"; you have absolutely no clue what you are talking about.

"To drive a car you need w, a, s, d. 4 controls. It involves no predictability" Thats just simply wrong, because you need to predict where other vehicles are going obstacles surface grip etc etc.

If you think it takes a lot of space to store the joint states for "EACH FRAME", just open a replay in a text editor. That's exactly how they are stored, and it doesn't take up much space (not more than 100 KB for an average aikido game).
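Quick back-of-the-envelope using that 100 KB figure (the number of recorded games is just an assumption):

    # assumption: ~100 KB per aikido replay and 100,000 recorded games
    kb_per_replay = 100
    games = 100_000
    total_gb = kb_per_replay * games / 1_000_000
    print(f"{total_gb:.0f} GB of raw replay text")   # 10 GB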

"Idk what you're on about but it doesn't belong to this discussion lol. We're talking about present time, not thousands of years later. And even at that you're wrong. It won't take hundreds of generations, it could easily be accomplished from a home computer in 20 years by the rate that technology is evolving. " What are you talking about? My method wouldn´t even take a week.

There is no way you have any coding experience when it comes to machine learning.
Ah right, so if you mean algorithms then just say so. No, it's not hundreds; you would need millions or billions of generations to beat a player. You seem to be a bit misinformed about how this works. The most common AI learning approach is stacking up a massive database of every possibility with the given controls: at first it is literally just spamming every combination possible and seeing what happens.

Now let's come back to chess. In chess there are 400 different positions after each player makes one move apiece, and 72,084 positions after two moves apiece. Since TB has 18 joints with 4 states each, as we established, that's 4^18 (about 69 billion) move combinations per player per turn, or roughly 4.7 x 10^21 when both players move at once, so the number of different positions in Toribash EVERY MOVE dwarfs chess. It would take billions of generations for an AI to beat a player in Toribash, and even that is very optimistic, since I'm still not including the fact that the machine has to predict the other player, which would probably take millions of games to build up a big enough database of how a typical player acts.
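Just to put numbers on that comparison (plain arithmetic on the figures above):

    chess_after_one_move_each = 400       # standard count of chess positions
    toribash_per_turn = (4 ** 18) ** 2    # 18 joints x 4 states, both players at once
    print(chess_after_one_move_each)      # 400
    print(f"{toribash_per_turn:.1e}")     # ~4.7e21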

So yes, as you said, 100 KB for an average aikido game. Now include the database of moves to predict plus the database of all past moves to reference, and you're looking at thousands of gigabytes of data. Can your home computer process that much information in the roughly 20 seconds a turn gives you to edit your Tori? Even a supercomputer would probably need some personalised tweaking to run an AI like this.

Btw, in GTA you don't really need to predict where other cars are going, in the sense that they are themselves hard-coded AIs with very predictable patterns.

And you're right, I don't have any experience with machine learning; it's just my own research and common sense.
I looked into this a while ago. The first thing I did was research existing work, and I found that some guys had put together a pretty solid artificial move maker using genetic algorithms and Lua scripts for a research project. Pretty neat, but not what we're after.



My vision, though, was to create a program that would take in a load of replays and learn from those to be able to play a particular mod, for example. The first problem with something like that is simply amassing enough data to actually train the model in the first place. I put in a request for a download of all/any aikido games that were cached or saved anywhere, but apparently nothing like that exists.

So we're stuck at the first hurdle there.
Two possible solutions to kick off that first step:
1. Community-source the replays. Get as many people as possible to upload all their replays that meet the parameters of what we're looking for. The main issue is that there probably still wouldn't be nearly enough data, and it requires effort for people to contribute. Even if you made a script to crawl the replays section, grab all the replays, and later filter them for the desired parameters, you'd probably still come up short.

2. Create a script that connects to the Toribash servers and "watches" games being played in rooms. This involves knowing the protocol used to connect to the servers and reading the game data into a usable format that we can then use to train our machine. I looked into this a while back, and I think with a bit of work it should be possible. Then let it run, recording games of the mod you want until you have enough data.



Then from there try to see if we can teach the machine.
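Whichever way the replays get collected, they'd still need parsing into per-turn joint states before any training. Something like the sketch below, assuming the .rpl text has FRAME lines followed by JOINT lines carrying index/state pairs (I haven't verified the exact format, so treat the layout as a guess):

    def parse_replay(path):
        # returns a list of {joint_index: state} dicts, one per recorded frame
        frames = []
        current = None
        with open(path, encoding="utf-8", errors="ignore") as f:
            for line in f:
                line = line.strip()
                if line.startswith("FRAME"):
                    current = {}
                    frames.append(current)
                elif line.startswith("JOINT") and current is not None:
                    # assumed layout: "JOINT <player>; j1 s1 j2 s2 ..."
                    _, _, values = line.partition(";")
                    nums = [int(n) for n in values.split()]
                    for joint, state in zip(nums[::2], nums[1::2]):
                        current[joint] = state
        return frames

Consecutive frames from that would give (situation, chosen joint states) pairs to train whatever model sits on top.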



This is still not at all trivial. Unlike something like chess where one piece moves to one location each turn, Toribash has 2 players moving simultaneously in 3D space with lots and lots of possible combinations each turn.


For each player:
21 joints with 4 states each = 4^21
Then x2 and x2 again for the hands gripped/ungripped, i.e. 4^21 * 4 = 2^44.
That's roughly 1.8 x 10^13 possible move combinations per turn, and that's just for one player.
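Quick sanity check on those numbers (pure arithmetic, nothing Toribash-specific):

    # per-player move combinations each turn: 21 joints x 4 states, 2 grips x 2 states
    per_player = 4 ** 21 * 2 * 2
    both_players = per_player ** 2        # both players pick their moves simultaneously
    print(f"{per_player:.2e}")            # ~1.76e13
    print(f"{both_players:.2e}")          # ~3.09e26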



I'm not super well versed in machine learning, so I can't really comment on how much of a problem that is, but my guess would be that it makes things harder rather than easier.
Of course it's possible, with appropriate qualifications. There are a lot of AIs with very effective learning programs. I think chess is more competitive, but it's nothing for a proper artificial intelligence.
I was reading a post about the Facebook artificial intelligence program that was shut down by the scientists after the AI created its own language, which is very dangerous.
Of course a Toribash AI would be way easier than that.

Solax's idea is possible, but I think it's a little harder than that.