- each CNPeer is given a unique chan *protocol.Event used to pass events to
the service.handleEvents() loop. This channel is now passed to
CNPeer.Handler() instead of NewCNPeer().
- service has basically been rewritten. The handleEvents() main loop now uses
reflect.Select() with one reflect.SelectCase per peer to wait on every
peer's eRecv channel (see the sketch after this list)
- new protocol Event type: EVENT_CLIENT_CONNECT
- Added service_test.go; blackbox-style testing like the others.
TestService() starts a service, spins up a bunch of dummy peers,
and verifies that each packet sent causes the corresponding packet
handler to be called.
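
Roughly, the per-peer eRecv channel and the reflect.Select() loop fit together
like the sketch below. Only protocol.Event, EVENT_CLIENT_CONNECT, and the
one-channel-per-peer idea come from this change; everything else (peerEntry,
handle(), the field names) is a stand-in, not the real code.

    // Minimal sketch of the new handleEvents() shape.
    package service

    import "reflect"

    const EVENT_CLIENT_CONNECT = 1 // the new protocol event type

    // Event stands in for protocol.Event.
    type Event struct {
        Type int
    }

    // peerEntry pairs a peer's unique eRecv channel (the one now handed to
    // CNPeer.Handler()) with whatever uData the service associated with it.
    type peerEntry struct {
        eRecv chan *Event
        uData interface{}
    }

    // handle stands in for dispatching to the registered packet handler.
    func handle(e *Event, uData interface{}) {}

    func handleEvents(peers []peerEntry) {
        // one reflect.SelectCase per peer's eRecv channel
        cases := make([]reflect.SelectCase, len(peers))
        for i, p := range peers {
            cases[i] = reflect.SelectCase{
                Dir:  reflect.SelectRecv,
                Chan: reflect.ValueOf(p.eRecv),
            }
        }

        for len(cases) > 0 {
            // reflect.Select blocks until any peer's channel fires or closes
            chosen, value, ok := reflect.Select(cases)
            if !ok {
                // closed channel: the peer went away, drop its case
                cases = append(cases[:chosen], cases[chosen+1:]...)
                peers = append(peers[:chosen], peers[chosen+1:]...)
                continue
            }
            handle(value.Interface().(*Event), peers[chosen].uData)
        }
    }
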
This started out as me making a service abstraction...
- db.Player exists again, and entity.Player embeds it (sketched after this list)
- chunk.ForEachEntity() lets you add/remove entities during iteration now
- removed account related fields from CNPeer
- protocol/pool has been merged into protocol.
Use protocol.GetBuffer() and protocol.PutBuffer() (sketched after this list).
- new protocol/internal/service!
service.Service is an abstraction layer that handles multiple *CNPeer
connections and lets you associate each one with an interface{} uData.
In the future it might also run a task queue for jobs that
modify/interact with the player's uData, called from service.handleEvents()
- PacketHandler callback type has a new param! The peer's uData is now passed
as well (see the sketch after this list)
- much of loginserver/shardserver is now handled by the shared service
abstraction
- SHARD: NPC_ENTER packets are now sent when the player finishes loading
rather than on enter.
- added Chunk.ForEachEntity() (sketched after this list)
- refactored SendPacketExclude() to use it
- Chunk.Entities is now Chunk.entities, which is private.
- Chunk.AddEntity() and Chunk.RemoveEntity() now lock the chunk mutex
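
A rough sketch of the embedding; only "entity.Player embeds db.Player" comes
from the change itself, the field names here are made up.

    // Sketch of the db.Player / entity.Player split.
    package entity

    // DBPlayer stands in for db.Player: the persistent, database-backed fields.
    type DBPlayer struct {
        PlayerID  int32
        AccountID int32
        Name      string
    }

    // Player embeds the db record (its fields are promoted) and layers
    // runtime-only state on top.
    type Player struct {
        DBPlayer       // stands in for the embedded db.Player
        X, Y, Z int32  // runtime-only state, for illustration
        Loaded  bool   // runtime-only state, for illustration
    }
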
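GetBuffer()/PutBuffer() presumably follow the usual pool pattern, something
like the sketch below; whether the real implementation actually uses sync.Pool
and *bytes.Buffer is an assumption on my part.

    // Sketch of the merged buffer pool; only the two function names come
    // from the change.
    package protocol

    import (
        "bytes"
        "sync"
    )

    var bufPool = sync.Pool{
        New: func() interface{} { return new(bytes.Buffer) },
    }

    // GetBuffer hands out a reusable buffer for building packets.
    func GetBuffer() *bytes.Buffer {
        return bufPool.Get().(*bytes.Buffer)
    }

    // PutBuffer resets the buffer and returns it to the pool.
    func PutBuffer(buf *bytes.Buffer) {
        buf.Reset()
        bufPool.Put(buf)
    }

Callers would then pair the two, e.g. buf := protocol.GetBuffer() followed by
defer protocol.PutBuffer(buf).
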
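A sketch of how Service, uData, and the new PacketHandler param fit together;
beyond "each peer gets an interface{} uData and handlers now receive it", the
signature details and the Service internals here are guesses.

    // Sketch of the service abstraction and the uData-aware handler.
    package service

    type CNPeer struct{ /* connection state */ }
    type Packet struct{ /* decoded packet body */ }

    // PacketHandler now also receives the uData associated with the peer.
    type PacketHandler func(peer *CNPeer, pkt *Packet, uData interface{}) error

    type Service struct {
        handlers map[uint32]PacketHandler
        uData    map[*CNPeer]interface{}
    }

    func NewService() *Service {
        return &Service{
            handlers: make(map[uint32]PacketHandler),
            uData:    make(map[*CNPeer]interface{}),
        }
    }

    // SetUserData associates arbitrary per-peer state (e.g. the logged-in
    // player) with a connection.
    func (s *Service) SetUserData(peer *CNPeer, v interface{}) {
        s.uData[peer] = v
    }

    // dispatch looks up the handler for a packet type and passes the peer's
    // uData along with the packet.
    func (s *Service) dispatch(peer *CNPeer, typ uint32, pkt *Packet) error {
        h, ok := s.handlers[typ]
        if !ok {
            return nil
        }
        return h(peer, pkt, s.uData[peer])
    }
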
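And a sketch of the Chunk changes: iterating over a snapshot taken under the
mutex is my guess at how add/remove-during-iteration is made safe, and the
Entity interface plus the send callback are stand-ins.

    // Sketch of the private entity set, ForEachEntity(), and the
    // SendPacketExclude() refactor.
    package chunk

    import "sync"

    type Entity interface{ ID() int }

    type Chunk struct {
        mtx      sync.Mutex
        entities map[Entity]struct{} // formerly the public Chunk.Entities
    }

    // ForEachEntity copies the entity set under the lock, then iterates the
    // copy, so the callback can safely call AddEntity()/RemoveEntity().
    func (c *Chunk) ForEachEntity(f func(e Entity)) {
        c.mtx.Lock()
        snapshot := make([]Entity, 0, len(c.entities))
        for e := range c.entities {
            snapshot = append(snapshot, e)
        }
        c.mtx.Unlock()

        for _, e := range snapshot {
            f(e)
        }
    }

    // AddEntity and RemoveEntity now take the chunk mutex themselves.
    func (c *Chunk) AddEntity(e Entity) {
        c.mtx.Lock()
        defer c.mtx.Unlock()
        c.entities[e] = struct{}{}
    }

    func (c *Chunk) RemoveEntity(e Entity) {
        c.mtx.Lock()
        defer c.mtx.Unlock()
        delete(c.entities, e)
    }

    // SendPacketExclude, rebuilt on ForEachEntity (the send callback stands
    // in for actually writing the packet to each peer).
    func (c *Chunk) SendPacketExclude(exclude Entity, send func(e Entity)) {
        c.ForEachEntity(func(e Entity) {
            if e != exclude {
                send(e)
            }
        })
    }
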
You should be able to see other players and jump around together, although
while testing locally one of the clients would always trigger the
"Some irregularities have been found with your connection to the
server, so your game will be closed" speed check. Really not sure why;
it might just be my machine.
Chunking sort of works, but it hasn't been tested for more than a few
seconds before one of the clients disconnects.
- loginMetadata is passed to shards through redis now
- shards announce they're alive via redis.AnnounceShard(), which just
populates a hash keyed 'shards'
- login servers grab the 'shards' hash and randomly pick a shard to
pass the player to (for now; sketched below)
- ./service shard && ./service login
- Many new environment variables; check config/config.go for more info,
or for a tl;dr just read the Dockerfile for the required ones
- Shard and login services now run in different processes! (and
containers?? wooaaah)
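
A sketch of the announce/pick flow, assuming the go-redis client is what's
underneath; redis.AnnounceShard() and the random pick come from the change,
while the hash layout (shard id -> address) and these signatures are
assumptions.

    // Sketch of shard announcement and the login server's random pick.
    package shards

    import (
        "context"
        "errors"
        "math/rand"

        "github.com/go-redis/redis/v8"
    )

    // announceShard: the shard writes itself into the 'shards' hash.
    func announceShard(ctx context.Context, rdb *redis.Client, shardID, addr string) error {
        return rdb.HSet(ctx, "shards", shardID, addr).Err()
    }

    // pickShard: the login server grabs the whole 'shards' hash and picks
    // one entry at random (for now).
    func pickShard(ctx context.Context, rdb *redis.Client) (string, error) {
        shards, err := rdb.HGetAll(ctx, "shards").Result()
        if err != nil {
            return "", err
        }
        if len(shards) == 0 {
            return "", errors.New("no shards have announced themselves")
        }
        addrs := make([]string, 0, len(shards))
        for _, addr := range shards {
            addrs = append(addrs, addr)
        }
        return addrs[rand.Intn(len(addrs))], nil
    }
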
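And a minimal sketch of one binary serving both roles, matching the
./service shard and ./service login invocations above; the real entrypoint
layout is an assumption.

    // Sketch of the shard/login subcommand split.
    package main

    import (
        "fmt"
        "os"
    )

    func runShard() { /* start the shard service */ }
    func runLogin() { /* start the login service */ }

    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: service <shard|login>")
            os.Exit(1)
        }
        switch os.Args[1] {
        case "shard":
            runShard()
        case "login":
            runLogin()
        default:
            fmt.Fprintf(os.Stderr, "unknown service %q\n", os.Args[1])
            os.Exit(1)
        }
    }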