There's a lot of material out there on genetic algorithms and A* search. I decided to build a little 2D maze and approach it two different ways: to better understand both algorithms without introducing too much complexity, to get familiar with Scala, and of course to have a bit of fun. I'm going to break this up into three parts to make it easier to digest.

• Part 1 will be the game
• Part 2 will be the Genetic Algorithm
• Part 3 will be A* Search

# The Maze

Here's a little picture of the maze in case the code looks too ugly on the screen:

```
# = Wall
P = Player
F = Finish
  = Floor Tile

# # # # # # # # # # # # # # #
#                           #
#   # # # #   #   # # #     #
#   # # # #   #   # # #     #
#   # # # # # #   #   #     #
# P # # # #       #   #     #
#   # # # # # # # #   # #   #
#   # # #             # #   #
#           # # # #     # F #
# # # # # # # # # # # # # # #
```

The goal is to get the Player `P` to the finish tile `F`.
The maze has 10 rows and 15 columns.
The Player can move up, down, left and right.
If the Player's next move would hit a wall, the Player stays in place.

Now, when we eyeball this very simple maze, we can solve it in less than a second. But how do we get the computer to do that and how do we know how efficient it was?

Sure, I could write some loop or recursive function that keeps wandering through empty spaces until it finds the finish. It would find *a* solution, but had the maze been more complex or much larger, that might take a long time: it could trace and retrace the Player's steps many times before finally reaching `F`.
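To make that concrete, here's a rough sketch of such a wanderer (my own standalone code, separate from the `Maze` object below; the grid mirrors the board above as strings, and the names are mine). It stays put on wall bumps just like the game does, and it does eventually reach the finish, but the step count varies wildly between runs:

```scala
import scala.util.Random

object RandomWalk {
  // Same maze as the Board below, written as strings for brevity
  val maze = Array(
    "###############",
    "#             #",
    "# #### # ###  #",
    "# #### # ###  #",
    "# ###### # #  #",
    "#P####   # #  #",
    "# ######## ## #",
    "# ###      ## #",
    "#     ####  #F#",
    "###############")

  // Step in random directions until we land on the finish tile,
  // returning how many moves (including wall bumps) it took
  def wander(start: (Int, Int), finish: (Int, Int), rng: Random): Int = {
    val moves = Seq((-1, 0), (1, 0), (0, -1), (0, 1)) // up, down, left, right
    var pos = start
    var steps = 0
    while (pos != finish) {
      val (dr, dc) = moves(rng.nextInt(moves.size))
      val next = (pos._1 + dr, pos._2 + dc)
      if (maze(next._1)(next._2) != '#') pos = next // a wall blocks the move
      steps += 1
    }
    steps
  }
}
```

Running `RandomWalk.wander((5, 1), (8, 13), new Random())` a few times makes the inefficiency obvious: the answer is never shorter than the direct route, and usually far longer.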

Here's the code for building the board and declaring some constants. The entire Maze game sits inside `object Maze{}`. I took the trick of aliasing `Array` from a StackOverflow post; for a one-off like this, it's pretty nice.

```scala
/*Maze definitions*/
val ROWS = 10
val COLS = 15
val WALL = '#'
val FLOOR = ' '
val PLAYER = 'P'
val FINISH = 'F'

val > = Array
val Board: Array[Array[Char]] = >(
  >('#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#'), // # = Wall
  >('#', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', '#'), // P = Player
  >('#', ' ', '#', '#', '#', '#', ' ', '#', ' ', '#', '#', '#', ' ', ' ', '#'), // F = Finish
  >('#', ' ', '#', '#', '#', '#', ' ', '#', ' ', '#', '#', '#', ' ', ' ', '#'), //   = Floor (Empty Character)
  >('#', ' ', '#', '#', '#', '#', '#', '#', ' ', '#', ' ', '#', ' ', ' ', '#'),
  >('#', 'P', '#', '#', '#', '#', ' ', ' ', ' ', '#', ' ', '#', ' ', ' ', '#'),
  >('#', ' ', '#', '#', '#', '#', '#', '#', '#', '#', ' ', '#', '#', ' ', '#'),
  >('#', ' ', '#', '#', '#', ' ', ' ', ' ', ' ', ' ', ' ', '#', '#', ' ', '#'),
  >('#', ' ', ' ', ' ', ' ', ' ', '#', '#', '#', '#', ' ', ' ', '#', 'F', '#'),
  >('#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#')
)
```

### Player Movement

```scala
var playerCurrentPosition = (5, 1)
var penalties = 0
val finishPosition = (8, 13)

def movePlayerUp() = {
  val futurePosition = (playerCurrentPosition._1 - 1, playerCurrentPosition._2)
  movePlayerAndUpdateBoard(futurePosition)
}

def movePlayerDown() = {
  val futurePosition = (playerCurrentPosition._1 + 1, playerCurrentPosition._2)
  movePlayerAndUpdateBoard(futurePosition)
}

def movePlayerLeft() = {
  val futurePosition = (playerCurrentPosition._1, playerCurrentPosition._2 - 1)
  movePlayerAndUpdateBoard(futurePosition)
}

def movePlayerRight() = {
  val futurePosition = (playerCurrentPosition._1, playerCurrentPosition._2 + 1)
  movePlayerAndUpdateBoard(futurePosition)
}

def movePlayerAndUpdateBoard(newPosition: (Int, Int)) = {
  if (isWall(newPosition)) {
    penalties += 1
  }

  if (isFinished(newPosition) || isFloor(newPosition)) {
    Board(playerCurrentPosition._1)(playerCurrentPosition._2) = FLOOR
    Board(newPosition._1)(newPosition._2) = PLAYER
    playerCurrentPosition = newPosition
  }
}
```

The `penalties` variable will increment each time the player hits the wall. This is used later in the fitness function for the genetic algorithm.
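As a preview of how that might work (a hypothetical sketch of my own; the real fitness function comes in Part 2), a lower-is-better score could combine the Player's distance from the finish with the accumulated wall penalties:

```scala
object Fitness {
  // Hypothetical fitness, lower is better: Manhattan distance from the
  // Player's position to the finish, plus the wall-bump penalties
  // accrued while replaying a move sequence
  def score(pos: (Int, Int), finish: (Int, Int), penalties: Int): Int = {
    val distance = math.abs(pos._1 - finish._1) + math.abs(pos._2 - finish._2)
    distance + penalties
  }
}
```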

Player movement is just an increment or decrement of the appropriate row or column in the array of arrays. I used tuples because they seemed appropriate: `._1` is the row and `._2` is the column.
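Put another way, each direction is a fixed (row, column) delta added to the current tuple. A table-driven variant (my own naming, not from the game's code) could look like this:

```scala
object Moves {
  // Rows grow downward in the Board, so "up" decrements ._1;
  // columns grow rightward, so "left" decrements ._2
  val deltas: Map[String, (Int, Int)] = Map(
    "up"    -> (-1, 0),
    "down"  -> (1, 0),
    "left"  -> (0, -1),
    "right" -> (0, 1))

  def next(pos: (Int, Int), dir: String): (Int, Int) = {
    val (dr, dc) = deltas(dir)
    (pos._1 + dr, pos._2 + dc)
  }
}
```

So `Moves.next((5, 1), "up")` gives `(4, 1)`, matching what `movePlayerUp()` computes.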

```scala
def isWall(move: (Int, Int)): Boolean = {
  Board(move._1)(move._2) == WALL
}

def isFinished(move: (Int, Int)): Boolean = {
  Board(move._1)(move._2) == FINISH
}

def isFloor(move: (Int, Int)): Boolean = {
  Board(move._1)(move._2) == FLOOR
}
```

These just check whether the tile at the proposed move is one of the values we declared earlier.

Finally when our algorithms run and we've moved the player around the board, we want to be able to reset everything back to the beginning:

```scala
def resetPlayer() = {
  if (playerCurrentPosition != (5, 1)) {
    movePlayerAndUpdateBoard((5, 1))
  }
  Board(8)(13) = FINISH
  penalties = 0
}
```

### Test It Out

Here's the main program which we'll execute:

```scala
object GameRunner {
  def main(args: Array[String]): Unit = {
    println("Started Game Runner")

    Maze.display()

    Maze.movePlayerUp()
    Maze.movePlayerUp()
    Maze.movePlayerUp()
    Maze.movePlayerUp()
    Maze.movePlayerRight()
    Maze.movePlayerRight()
    Maze.movePlayerRight()
    Maze.movePlayerDown()

    Maze.display()

    Maze.resetPlayer()

    Maze.display()
  }
}
```

Here's the output in IntelliJ. Notice that moving the Player down into a wall did not change the Player's position, and resetting the player put everything back.

Here is the Maze object in its entirety with an added display function:

```scala
object Maze {
  /*Maze definitions*/
  val ROWS = 10
  val COLS = 15
  val WALL = '#'
  val FLOOR = ' '
  val PLAYER = 'P'
  val FINISH = 'F'

  val > = Array
  val Board: Array[Array[Char]] = >(
    >('#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#'), // # = Wall
    >('#', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', '#'), // P = Player
    >('#', ' ', '#', '#', '#', '#', ' ', '#', ' ', '#', '#', '#', ' ', ' ', '#'), // F = Finish
    >('#', ' ', '#', '#', '#', '#', ' ', '#', ' ', '#', '#', '#', ' ', ' ', '#'), //   = Floor (Empty Character)
    >('#', ' ', '#', '#', '#', '#', '#', '#', ' ', '#', ' ', '#', ' ', ' ', '#'),
    >('#', 'P', '#', '#', '#', '#', ' ', ' ', ' ', '#', ' ', '#', ' ', ' ', '#'),
    >('#', ' ', '#', '#', '#', '#', '#', '#', '#', '#', ' ', '#', '#', ' ', '#'),
    >('#', ' ', '#', '#', '#', ' ', ' ', ' ', ' ', ' ', ' ', '#', '#', ' ', '#'),
    >('#', ' ', ' ', ' ', ' ', ' ', '#', '#', '#', '#', ' ', ' ', '#', 'F', '#'),
    >('#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#', '#')
  )

  def display() = {
    println(s"Player Current Position: $playerCurrentPosition")
    println(Board.deep.mkString("\n"))
  }

  /*Actual Player positioning and Movement*/

  var playerCurrentPosition = (5, 1)
  var penalties = 0
  val finishPosition = (8, 13)

  def resetPlayer() = {
    if (playerCurrentPosition != (5, 1)) {
      movePlayerAndUpdateBoard((5, 1))
    }
    Board(8)(13) = FINISH
    penalties = 0
  }

  def movePlayerUp() = {
    val futurePosition = (playerCurrentPosition._1 - 1, playerCurrentPosition._2)
    movePlayerAndUpdateBoard(futurePosition)
  }

  def movePlayerDown() = {
    val futurePosition = (playerCurrentPosition._1 + 1, playerCurrentPosition._2)
    movePlayerAndUpdateBoard(futurePosition)
  }

  def movePlayerLeft() = {
    val futurePosition = (playerCurrentPosition._1, playerCurrentPosition._2 - 1)
    movePlayerAndUpdateBoard(futurePosition)
  }

  def movePlayerRight() = {
    val futurePosition = (playerCurrentPosition._1, playerCurrentPosition._2 + 1)
    movePlayerAndUpdateBoard(futurePosition)
  }

  def movePlayerAndUpdateBoard(newPosition: (Int, Int)) = {
    if (isWall(newPosition)) {
      penalties += 1
    }

    if (isFinished(newPosition) || isFloor(newPosition)) {
      Board(playerCurrentPosition._1)(playerCurrentPosition._2) = FLOOR
      Board(newPosition._1)(newPosition._2) = PLAYER
      playerCurrentPosition = newPosition
    }
  }

  def isWall(move: (Int, Int)): Boolean = {
    Board(move._1)(move._2) == WALL
  }

  def isFinished(move: (Int, Int)): Boolean = {
    Board(move._1)(move._2) == FINISH
  }

  def isFloor(move: (Int, Int)): Boolean = {
    Board(move._1)(move._2) == FLOOR
  }

}
```

Next time I'll walk through a genetic algorithm implementation that finds an "optimized" solution to the maze. But if you want to go ahead, take this code and put something in the game runner to solve it, please do.
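If you want a head start on that, here's a rough sketch of my own (plain breadth-first search, separate from the `Maze` object and from the algorithms in the upcoming parts) that computes the minimum number of moves on the same grid:

```scala
import scala.collection.mutable

object BfsSolver {
  // Same maze as the Board, written as strings for brevity
  val maze = Array(
    "###############",
    "#             #",
    "# #### # ###  #",
    "# #### # ###  #",
    "# ###### # #  #",
    "#P####   # #  #",
    "# ######## ## #",
    "# ###      ## #",
    "#     ####  #F#",
    "###############")

  // Breadth-first search: explore tiles in order of distance from the
  // start, so the first time we dequeue the finish we have the minimum
  // number of moves. Returns -1 if the finish is unreachable.
  def shortestPath(start: (Int, Int), finish: (Int, Int)): Int = {
    val dist = mutable.Map(start -> 0)
    val queue = mutable.Queue(start)
    while (queue.nonEmpty) {
      val current = queue.dequeue()
      if (current == finish) return dist(current)
      for ((dr, dc) <- Seq((-1, 0), (1, 0), (0, -1), (0, 1))) {
        val next = (current._1 + dr, current._2 + dc)
        if (maze(next._1)(next._2) != '#' && !dist.contains(next)) {
          dist(next) = dist(current) + 1
          queue.enqueue(next)
        }
      }
    }
    -1
  }
}
```

`BfsSolver.shortestPath((5, 1), (8, 13))` gives a baseline you can compare against whatever the genetic algorithm comes up with in Part 2.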

Stay tuned for part 2. I promise it gets much more interesting.