Saturday, October 12, 2024

Aftershock Simulator Sprint 3


This is the tutorial video I made for my team demonstrating how to use the objective system I designed this sprint. I’m really happy with how well it worked out and I look forward to getting some UI implemented so that it can fully appear in the game.


I went into designing this system with some idea of what to make. I knew that designers would need to be able to build customizable, modular objectives and specify the order in which each level progresses through them.

What was a huge help was a comprehensive list one of my team members put together of all the currently planned levels and objectives. This meant I didn’t have to make a universally-flexible system, just one that fit what we needed, which gave me some valuable direction. I looked at the list and broke it down into three categories: clicking on tiles, clicking on UI elements, and taking key input.

Initially I drew up a class diagram to figure out how I wanted the system to work, but I iterated through so many different approaches that it mostly became a brainstorming document. Objectives started out more or less as I planned them, with one major exception: I knew I wanted an abstract class that the three objective types could inherit from, but I ran into a huge problem pretty quickly.

I started by trying to implement TileObjectives (for clicking on tiles in game) and realized the issue: all of the relevant game data is stored in completely different places. TileObjectives needed to grab their info from the SelectionBoxManager, InputObjectives needed to take input from the CameraController, and UIObjectives needed to grab their info from three different layer visibility scripts.

This is why Objectives inherit from Observer: I decided to implement the Observer design pattern to tie these tangled scripts together (a rough sketch follows the list below). TileObjectives can observe the SelectionBoxManager in order to make sure they

  1. Only run their check when the player clicks on a tile

  2. Receive the game data needed to see if the Objective was completed
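
Roughly, the pattern looks something like the sketch below. The class and script names match the ones I described, but the method signatures, enum, and event data are just illustrative stand-ins, not the project's actual code:

    using UnityEngine;

    // Rough sketch of the Observer pattern behind Objectives. The class names
    // match the ones described in this post; everything else is illustrative.
    public enum ObjectiveEventType { TileClicked, KeyPressed, LayerToggled }

    // The payload a Subject hands to its observers when something happens.
    public class ObjectiveEventData
    {
        public ObjectiveEventType Type;
        public Vector2Int Tile;      // used by TileObjectives
        public KeyCode Key;          // used by InputObjectives
        public string LayerName;     // used by UIObjectives
    }

    public abstract class Observer
    {
        public abstract void OnNotify(ObjectiveEventData data);
    }

    public abstract class Objective : Observer
    {
        public bool IsComplete { get; protected set; }
    }

    public class TileObjective : Objective
    {
        public Vector2Int TargetTile;

        // 1) only runs its check when the SelectionBoxManager reports a click,
        // 2) receives the clicked tile in the event data to judge completion.
        public override void OnNotify(ObjectiveEventData data)
        {
            if (data.Type == ObjectiveEventType.TileClicked && data.Tile == TargetTile)
                IsComplete = true;
        }
    }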


And this system worked awesomely! There were a few hitches along the way (I had to make some enums for ease of communication), but the last big one was the UIObjectives. As I mentioned, each UI element connected to code in a different script, so I made a LayerUIWrapper class that “wraps” together references and shared information for the relevant UI scripts.
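
Conceptually the wrapper is just a bundle of references plus one place for the layer scripts to report their UI events, building on the sketch above. Something like this, where the field names are placeholders rather than the actual layer-visibility scripts:

    using UnityEngine;

    // Conceptual sketch of the LayerUIWrapper: it bundles references to the
    // separate layer-visibility scripts and forwards their UI events to the
    // UIObjective observing it. The field names are placeholders.
    public class LayerUIWrapper : MonoBehaviour
    {
        [SerializeField] private MonoBehaviour firstLayerUI;    // placeholder references
        [SerializeField] private MonoBehaviour secondLayerUI;
        [SerializeField] private MonoBehaviour thirdLayerUI;

        private Observer currentObserver;

        public void Attach(Observer observer) => currentObserver = observer;

        // Each layer-visibility script calls this when its UI element is toggled,
        // so the observing UIObjective only ever has to look at this one wrapper.
        public void ReportLayerToggled(string layerName)
        {
            currentObserver?.OnNotify(new ObjectiveEventData
            {
                Type = ObjectiveEventType.LayerToggled,
                LayerName = layerName
            });
        }
    }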

The final piece of the puzzle was the ObjectiveManager. The first part is pretty simple: it has a list of Objectives that designers can fill in a certain order, and the manager progresses through them during the level. The magic happens in the attachObjective function, where the ObjectiveManager attaches Objectives to the appropriate Subjects based on what type of objective they are. I’m really proud of this system because, at least in my mind, it should be super efficient: the game only tracks the current objective, and the objective only looks at one script.
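
As a sketch, and building on the classes above, the manager boils down to something like this. InputObjective and UIObjective are assumed to follow the same pattern as the TileObjective sketch, and the Attach methods on the observed scripts are stand-ins for however the real Subjects register observers:

    using System.Collections.Generic;
    using UnityEngine;

    // Sketch of the ObjectiveManager. Designers fill the list in the order the
    // level should progress through; the manager only ever tracks the current
    // objective and attaches it to the one script it needs to watch.
    public class ObjectiveManager : MonoBehaviour
    {
        [SerializeField] private SelectionBoxManager selectionBoxManager;
        [SerializeField] private CameraController cameraController;
        [SerializeField] private LayerUIWrapper layerUIWrapper;

        // Ordered by designers in the Inspector.
        [SerializeField] private List<Objective> objectives = new List<Objective>();

        private int current;

        private void Start()
        {
            if (objectives.Count > 0)
                AttachObjective(objectives[current]);
        }

        // Attach the objective to the appropriate Subject based on its type.
        private void AttachObjective(Objective objective)
        {
            switch (objective)
            {
                case TileObjective tile:   selectionBoxManager.Attach(tile); break;
                case InputObjective input: cameraController.Attach(input);   break;
                case UIObjective ui:       layerUIWrapper.Attach(ui);        break;
            }
        }

        // Called when the current objective completes; move on to the next one.
        public void OnObjectiveCompleted()
        {
            current++;
            if (current < objectives.Count)
                AttachObjective(objectives[current]);
        }
    }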


Wednesday, September 25, 2024

Aftershock Simulator Sprint 2

 The first thing I worked on this sprint was the person occupancy icon, a special UI element that fills up with color to indicate the proportion of people in a building out of its maximum capacity.

All the code for getting the icon to fill up had already been written; the challenge for this task was taking code written for the old, deprecated UI system and re-implementing it in Vivian’s improved UI system. I really had to get into the weeds of understanding how Vivian’s UI system worked, but once I did, I was able to add a new sprite element to the UI pop-up, grab a reference to it, and add the code that gets it to display the right amount of color fill.
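
At its core, the fill behavior is just Unity’s filled Image component driven by the occupancy ratio. A minimal sketch (the class and field names here are mine, not the actual ones in Vivian’s system):

    using UnityEngine;
    using UnityEngine.UI;

    // Minimal sketch of the occupancy fill. The names here are placeholders,
    // not the actual fields in the project's UI system.
    public class OccupancyIcon : MonoBehaviour
    {
        // The new sprite element added to the pop-up, with Image Type set to Filled.
        [SerializeField] private Image fillImage;

        // Fill the icon in proportion to how full the building is.
        public void SetOccupancy(int peopleInBuilding, int maxCapacity)
        {
            fillImage.fillAmount = maxCapacity > 0
                ? Mathf.Clamp01((float)peopleInBuilding / maxCapacity)
                : 0f;
        }
    }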

Calculating this value was only possible because of my work last sprint figuring out how many people were in a building. Ultimately I’m really happy that I was able to reach parity with the old behavior while working within the new UI system without having to change it much.


With the next task, however, the lack of a designer really started to impact our team. I was initially given a card to fix the search-and-rescue timer that displays the rescue progress over a building. After investigating it, I found that everything was working as intended; it just wasn’t clear to the player how it was supposed to work. It was a design problem, not a code problem.

The team threw out a couple of suggestions for how to better indicate to the player what was happening, but since these ranged from changing the art assets to working with the AI system, the card ended up being passed from person to person. We also discovered that many of these suggestions were impossible with the way the game’s codebase had been structured, so I eventually just decided to add a text popup to the progress bar describing what’s happening.

The major outcome of this was that we realized we would need a designer, especially with the direction the project was going: the majority of our backlog items needed specialist interviews before we could implement them, and even then, they didn’t have specs. Fortunately, our professor decided to step up as the project’s designer, especially since he plans to continue the project past our team’s involvement.


This also, however, led to a pivot in our priorities for the project. One of the first tasks I had after this meeting was coming up with discussion questions so we’d be ready for a specialist interview in the future. We discussed what we thought would be needed for the project to better focus on its learning objectives, and starting next sprint, we’ll probably begin working on more of those major redesigns.

That’s why, for my last task of the sprint, I thought it would be worthwhile to try fixing the day/night cycle. It did exist within the project, but it was disabled since it wasn’t properly functional.

The first issue to fix was that it wasn’t connected to the game’s in-game clock. After reading through the code for how the various timer and clock functions work, I was able to add another module to the clock that calls the day/night cycle to update with time, passing in the current time. From there, the day/night cycle uses a bunch of math to figure out how bright the sun should be and what angle it should be at. I wrote a function that turns the hour/minute time into minutes, divides it by the number of minutes in a day, converts it to radians so I can take the cosine, and then inverts that so that the sun gets brighter and brighter, hits its peak at midday, and then gets darker.
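
In code, that math boils down to something like this sketch (not the verbatim script; the class, method, and field names are stand-ins for what I described):

    using UnityEngine;

    // Sketch of the sun-brightness math described above: convert the in-game
    // clock time to a fraction of the day, take the cosine, and invert it so
    // brightness is zero at midnight and peaks at noon.
    public class DayNightCycle : MonoBehaviour
    {
        private const float MinutesPerDay = 24f * 60f;

        [SerializeField] private Light sun;
        [SerializeField] private float maxSunIntensity = 1.2f;

        // Called by the in-game clock each time it updates, passing the current time.
        public void UpdateSun(int hour, int minute)
        {
            float minutes = hour * 60f + minute;
            float angle = (minutes / MinutesPerDay) * 2f * Mathf.PI;   // radians

            // cos(angle) is +1 at midnight and -1 at noon; inverting and remapping
            // to 0..1 makes the sun darkest at midnight and brightest at midday.
            float brightness = (1f - Mathf.Cos(angle)) * 0.5f;
            sun.intensity = brightness * maxSunIntensity;

            // The sun's angle can be driven from the same fraction of the day,
            // sweeping 360 degrees over 24 hours (horizon at 6 AM, overhead at noon).
            float pitch = (minutes / MinutesPerDay) * 360f - 90f;
            sun.transform.rotation = Quaternion.Euler(pitch, 0f, 0f);
        }
    }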



There’s a similar piece of code for the moon lighting to keep things from getting too dark at night, although it probably needs some fine-tuning. I went ahead and implemented it to a functional state; next I plan to show it to my team, get some feedback, and make a few adjustments, since taking things out is way easier than adding them. In particular, I’m worried about performance tanking due to the dynamic shadows, but because of the way it’s designed it’s super easy for me to disable them if they’re too costly.


This sprint had me doing a surprising amount of problem-solving, but now that it’s over I think the team is in a much better spot, with a designated designer and a shared plan for the direction of the project. Overall, I think I made some relevant quality-of-life improvements to the game’s UI and visual elements. Looking into the next sprint, we have a much better idea of what we’re trying to make, and I look forward to implementing the changes we’ve decided on.




Thursday, September 12, 2024

Aftershock Simulator Sprint 1

For this semester, I’ve been assigned to a special project making a training tool for the United States Geological Survey that models how earthquakes could affect San Francisco. A different team worked on the project for nine months and then passed the torch to our team to complete it.


Here’s what we’re starting with. This project is totally unlike anything I’ve ever worked on before, and this first sprint was extremely challenging. The first hurdle we faced was doing lots and lots of reading through scripts and documentation to understand how the project had been assembled. The old team suddenly and unexpectedly lost their funding halfway through development, so one can imagine the state of organization a project gets left in under those conditions. Nevertheless, it’s a really impressive system they set up.


One strategy I initially tried when going through the game scripts was keeping my own notes document where I could write summaries of the scripts, map connections between them, and mark which ones I’d looked at and which ones I hadn’t yet. While this somewhat helped me understand the codebase, it was rather time-inefficient.


The most important move I ended up making was setting up a new install of Unity and Visual Studio to work from my laptop instead of my desktop; it came with an updated package that lets me click on a usage of a class or struct in the code and find its definition and references. This was something I’d never had to think about before, since in all the projects I’d made myself I never had an issue finding my definitions, but it was vital to working effectively in somebody else’s project.



Ultimately, learning to use this feature was the most valuable takeaway, and after a lot of hard work, I managed to get my first assigned feature implemented! I modified the existing code that displays a building’s details when it’s clicked on so that it also shows how many people are in the building. First, I needed to add a reference to the RuntimeNavigationNetwork, which contains a dictionary called TileNodes that lets me input the clicked-on tile and output an AI node. Not shown in the image, but I also had to add the RuntimeNavigationNetwork as a library so that I could reference the TransportNode data class for the nodes. Then I feed that node to the AIMapData’s PeopleNumDic dictionary to find how many people are at the selected node. Finally, I return that back to the UI display, which shows the number of people in the building.
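
Strung together, the lookup chain amounts to something like this sketch. The dictionary and class names are the ones described above, while the wrapper class, field names, and tile key type are my own guesses:

    using UnityEngine;

    // Rough summary of the lookup chain described above. The dictionary and class
    // names come from the post; the wrapper method, field names, and the tile key
    // type are guesses for illustration.
    public class BuildingOccupancyLookup : MonoBehaviour
    {
        [SerializeField] private RuntimeNavigationNetwork runtimeNavigationNetwork;
        [SerializeField] private AIMapData aiMapData;

        public int GetPeopleInBuilding(Vector2Int clickedTile)
        {
            // TileNodes: clicked tile -> AI node (a TransportNode).
            TransportNode node = runtimeNavigationNetwork.TileNodes[clickedTile];

            // PeopleNumDic: node -> number of people currently at that node.
            // This value is handed back to the UI display for the building pop-up.
            return aiMapData.PeopleNumDic[node];
        }
    }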


I’m somewhat disappointed in myself that this is all I have for tangible progress, but I hope the description of the process shows just how much work went into it. The majority of the battle was figuring out where all this data was stored in the codebase, and then figuring out how to link up those wildly different systems. With this first step up the mountain, we’ve got some momentum going forward.



Saturday, May 25, 2024

270 Final Post

 Some screenshots of the final version of the level developed for this class:

Thursday, May 16, 2024

CAGD 373 Sprint Review 2

Sprint 3 (4/10–4/16), my second one, was relatively relaxed compared to the mountainous first sprint. During this sprint I UV’d and textured the mounted turret. The UVs required a few topological touch-ups, but overall didn’t prove to be too difficult, just a little time-consuming. When it came to the texture, I made heavy use of normal and height maps to get the details I wanted, such as the vents on the barrel of the gun.

With this texture I also got to use some brush layers. I used a simple brush to put some dark scorch marks on the front of the gun to give the impression of use. I also used a brush to put the tan paint on the handles of the gun. Inspired by how physical artists convey texture and wear, I used a spongy eraser to make the paint look chipped away from wear. The last element of the texture was using an alpha to put some text on the gun where appropriate.

I didn’t do as much work during this sprint as the others, but I probably needed a break after how much work the first sprint was for me. I recognize that having a large team was the reason I could spend so much time focusing on a single model; however, it was certainly a model that required that level of attention. Overall, I took on some more general work going forwards, and everything worked out alright, as we got all the modeling and texturing done with time to spare.


CAGD 373 Game Scene Review 1

 

Since the first sprint was just getting into groups and deciding on a game scene, the “first” sprint where I completed cards was actually sprint 2 (4/3–4/9). During this sprint I had a single yet goliath task: modeling the mounted turret. 

Since I had never played Halo: Reach before, the first step for me was collecting reference. I ended up using a lot of different references for this model because it looks different from different angles. I tried to find a good side view of the left and right sides, a three-quarter view, a top view, and even a back view. While these were immensely helpful, I still had a hard time telling how the inner cylinder of the gun was supposed to be shaped, since in all the renderings it’s obscured by shadows.

This was my first time making a prop this complex, and trying to make it all as one mesh definitely slowed things down. For future props of similar complexity, I made a blockout first before tackling them, just to make sure each piece lined up at the seams. That being said, I’m still super proud of how the mounted turret turned out. I think I managed to capture the important details of the original model in a recognizable way. While studying the Halo turret, I noticed the attention to detail in how the gun parts slot together, and tried to follow that to make sure I got each part. It was still very time-consuming and I was quite stressed, but I learned a lot that would make my models better going forwards, and I ended up with a pretty cool model.
