started out as me making a service abstraction...
- db.Player exists again, and entity.Player uses it as an embedded struct
- chunk.ForEachEntity() lets you add/remove entities during iteration now
- removed account related fields from CNPeer
- protocol/pool has been merged into protocol.
use protocol.GetBuffer() and protocol.PutBuffer().
- new protocol/internal/service!
service.Service is an abstraction layer that handles multiple *CNPeer
connections and lets you associate each one with an interface{} uData.
In the future it might also handle a task queue for jobs that
modify/interact with the player's uData, called from service.handleEvents()
(there's a rough sketch of the idea right after this list)
- PacketHandler callback type has a new param! uData is now passed to handlers as well
- much of loginserver/shardserver is now handled by the shared service
abstraction
- SHARD: NPC_ENTER packets are now sent on player loading complete
rather than on enter.
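To make the service idea a bit more concrete, here's a minimal sketch of how a Service might track a uData value per peer and hand it to the new-style PacketHandler. Everything in it (the Peer stand-in, field names, signatures) is assumed for illustration and isn't the actual protocol/internal/service API:

```go
package main

import (
	"fmt"
	"sync"
)

// stand-in for *protocol.CNPeer; the real type wraps the TCP connection
type Peer struct{ name string }

// PacketHandler now receives the peer's uData alongside the peer itself
type PacketHandler func(peer *Peer, pktID uint32, uData interface{})

// Service tracks every connected peer and the uData associated with it
type Service struct {
	mu       sync.Mutex
	peers    map[*Peer]interface{}
	handlers map[uint32]PacketHandler
}

func NewService() *Service {
	return &Service{
		peers:    make(map[*Peer]interface{}),
		handlers: make(map[uint32]PacketHandler),
	}
}

// AddPeer associates uData (eg. an *entity.Player on the shard) with the peer
func (s *Service) AddPeer(p *Peer, uData interface{}) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.peers[p] = uData
}

func (s *Service) RemovePeer(p *Peer) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.peers, p)
}

// OnPacket registers a handler for a packet ID
func (s *Service) OnPacket(id uint32, h PacketHandler) {
	s.handlers[id] = h
}

// handlePacket looks up the peer's uData and passes it along to the handler
func (s *Service) handlePacket(p *Peer, id uint32) {
	s.mu.Lock()
	uData := s.peers[p]
	h, ok := s.handlers[id]
	s.mu.Unlock()
	if ok {
		h(p, id, uData)
	}
}

func main() {
	svc := NewService()
	p := &Peer{name: "client"}
	svc.AddPeer(p, "per-connection player state goes here")
	svc.OnPacket(1, func(peer *Peer, id uint32, uData interface{}) {
		fmt.Printf("peer=%s pkt=%d uData=%v\n", peer.name, id, uData)
	})
	svc.handlePacket(p, 1)
}
```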
- lol, i had no idea python *FINALLY* added a switch/case equivalent.
sadly though, the generated python bytecode performs nearly
identically to a plain if/elif/else chain.
there's no lookup table magic for match/case; the bytecode is almost
identical to what if/else if/else produces, so it amounts to syntax sugar 😭
- our db_test tests now use PostgreSQL 15 (which is the same
version our docker-compose file uses) for consistency.
- also added another test, TestDBPlayer
- added Chunk.ForEachEntity()
- refactored SendPacketExclude() to use it
- Chunk.Entities is now Chunk.entities, which is unexported
- Chunk.AddEntity() and RemoveEntity() now lock the chunk mutex
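A rough sketch of how ForEachEntity() can allow adds/removes mid-iteration while AddEntity()/RemoveEntity() take the chunk mutex: copy the entity set under the lock, then run the callback outside it. The field names and the snapshot approach are my assumptions, not necessarily what chunk.go actually does:

```go
package chunk

import "sync"

// Entity stands in for whatever the real entity interface looks like
type Entity interface{}

type Chunk struct {
	mu       sync.Mutex
	entities map[Entity]struct{} // unexported now
}

func NewChunk() *Chunk {
	return &Chunk{entities: make(map[Entity]struct{})}
}

// AddEntity and RemoveEntity both take the chunk mutex
func (c *Chunk) AddEntity(e Entity) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.entities[e] = struct{}{}
}

func (c *Chunk) RemoveEntity(e Entity) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.entities, e)
}

// ForEachEntity iterates over a snapshot taken under the lock, so the callback
// is free to call AddEntity()/RemoveEntity() (or send packets that do) without
// corrupting the iteration or deadlocking on the chunk mutex.
func (c *Chunk) ForEachEntity(fn func(Entity)) {
	c.mu.Lock()
	snapshot := make([]Entity, 0, len(c.entities))
	for e := range c.entities {
		snapshot = append(snapshot, e)
	}
	c.mu.Unlock()

	for _, e := range snapshot {
		fn(e)
	}
}
```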
instead of writing the packet size to peer.conn.Write() directly, we now make space in buf for the packet size
and patch it in place. this lets us get away with only 1 call to peer.conn.Write(), which ensures that
the full packet is written properly and is goroutine safe :3
this fixes a race condition where, if 2 goroutines tried to send a packet at the same time, the packets could
end up malformed due to the 2 separate calls to peer.conn.Write().
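Roughly, the send path now looks something like the sketch below. The pool here stands in for protocol.GetBuffer()/PutBuffer(), and the 4-byte little-endian size prefix is an assumption about the packet layout, not a statement of the real wire format:

```go
package protocol

import (
	"bytes"
	"encoding/binary"
	"net"
	"sync"
)

// stand-in for the buffer pool behind protocol.GetBuffer()/PutBuffer()
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// sendPacket builds the whole packet (size prefix + payload) in one buffer and
// hands it to conn.Write() in a single call, so two goroutines sending at the
// same time can't interleave their bytes on the wire.
func sendPacket(conn net.Conn, payload []byte) error {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset()
		bufPool.Put(buf)
	}()

	// reserve space for the packet size up front...
	buf.Write(make([]byte, 4))
	buf.Write(payload)

	// ...and patch it in place once the final length is known
	binary.LittleEndian.PutUint32(buf.Bytes()[:4], uint32(len(payload)))

	// exactly one Write() call per packet
	_, err := conn.Write(buf.Bytes())
	return err
}
```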
you should be able to view other players and jump around together,
although while testing locally one of the clients would always trigger
the "Some irregularities have been found with your connection to the
server, so your game will be closed" speed check for some reason ???
really not sure, might just be my machine.
chunking uhhh works? kind of, it hasn't been tested for more than a few seconds
before one of the clients disconnects.
- loginMetadata is passed to shards through redis now
- shards announce they're alive via redis.AnnounceShard(), which just
populates a redis hash keyed 'shards'
- login servers grab the 'shards' hash and randomly pick a shard to
pass the player to (for now); see the sketch at the end of this list
- ./service shard && ./service login
- Many new environment variables; check config/config.go for more info,
or for a tl;dr just read the Dockerfile for the required ones
- Shard and login services now run in different processes! (and
containers?? wooaaah)
- the shard's IP sent to the client via the login server is currently hardcoded in config/config.go
- no packets other than P_CL2FE_REQ_PC_ENTER and P_CL2FE_REQ_PC_LOADING_COMPLETE are handled by the shard server yet
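As promised above, here's a rough sketch of the shard announce/pick flow using go-redis v9. redis.AnnounceShard()'s real signature and the hash field layout are assumptions; this only illustrates the 'shards' hash idea and doesn't cover passing loginMetadata:

```go
package shardlist

import (
	"context"
	"fmt"
	"math/rand"

	"github.com/redis/go-redis/v9"
)

// shard side: announce we're alive by writing our address into the 'shards' hash
func announceShard(ctx context.Context, rdb *redis.Client, shardID, addr string) error {
	return rdb.HSet(ctx, "shards", shardID, addr).Err()
}

// login side: grab the whole 'shards' hash and pick a random shard (for now)
func pickShard(ctx context.Context, rdb *redis.Client) (string, error) {
	shards, err := rdb.HGetAll(ctx, "shards").Result()
	if err != nil {
		return "", err
	}
	if len(shards) == 0 {
		return "", fmt.Errorf("no shards have announced themselves")
	}

	addrs := make([]string, 0, len(shards))
	for _, addr := range shards {
		addrs = append(addrs, addr)
	}
	return addrs[rand.Intn(len(addrs))], nil
}
```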