Build Your Own Serverless: part 3

In part 2 of the Build Your Own Serverless series, we learned how to modularize the project and make it loosely coupled, so that it is easy to extend.

In this post, we will add several important features and enhancements to our project:

  • Add SQLite as storage for service definitions, using GORM.
  • Add a new hostname allocation strategy.
  • Use the Docker Go client instead of exec.Command.
  • Add graceful shutdown to clean up resources.

At the end of this article, we link to the associated GitHub repository so you can explore the code and experiment with it further.


So far, we have constructed:

Serverless diagram

SQLite Storage:

SQLite gives our admin service persistent storage for service definitions, so data survives restarts and is easy to retrieve and manipulate.

Let's first recall what our repository interface looks like:

type ServiceDefinitionRepository interface {
	GetAll() ([]ServiceDefinition, error)
	GetByName(name string) (*ServiceDefinition, error)
	GetByHostName(hostName string) (*ServiceDefinition, error)
	Create(service ServiceDefinition) error
}

Previously, we implemented an in-memory version of this repository. We are not throwing it away: it remains useful, particularly for unit testing.
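
To show why, here is a minimal test sketch. It assumes the in-memory implementation from part 2 exposes a constructor named NewInMemoryServiceDefinitionRepository; adjust the name to match your code.

package admin

import "testing"

func TestInMemoryRepository(t *testing.T) {
	// Assumed constructor name from part 2; adjust to match your code.
	repo := NewInMemoryServiceDefinitionRepository()

	// Create a definition and read it back through the interface.
	sDef := ServiceDefinition{Name: "hello", ImageName: "hello-image", Host: "app-0.cless.cloud"}
	if err := repo.Create(sDef); err != nil {
		t.Fatalf("create failed: %v", err)
	}
	got, err := repo.GetByName("hello")
	if err != nil {
		t.Fatalf("get failed: %v", err)
	}
	if got.ImageName != "hello-image" {
		t.Fatalf("unexpected image name: %s", got.ImageName)
	}
}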

This is the SQLite repository implementation:

// create type for the sqlite repository that will implement the ServiceDefinitionRepository interface
type SqliteServiceDefinitionRepository struct {
	db *gorm.DB
}

// create a constructor for the sqlite repository
func NewSqliteServiceDefinitionRepository(db *gorm.DB) ServiceDefinitionRepository {
	db.AutoMigrate(&ServiceDefinition{})
	return &SqliteServiceDefinitionRepository{db: db}
}

// implement the GetAll method
func (r *SqliteServiceDefinitionRepository) GetAll() ([]ServiceDefinition, error) {
	var services []ServiceDefinition
	result := r.db.Find(&services)
	if result.Error != nil {
		return nil, result.Error
	}
	return services, nil
}

// implement the GetByName method
func (r *SqliteServiceDefinitionRepository) GetByName(name string) (*ServiceDefinition, error) {
	var service ServiceDefinition
	result := r.db.First(&service, "name = ?", name)
	if result.Error != nil {
		return nil, result.Error
	}
	return &service, nil
}

// implement the GetByHostName method
func (r *SqliteServiceDefinitionRepository) GetByHostName(hostName string) (*ServiceDefinition, error) {
	var service ServiceDefinition
	result := r.db.First(&service, "host = ?", hostName)
	if result.Error != nil {
		return nil, result.Error
	}
	return &service, nil
}

// implement the Create method
func (r *SqliteServiceDefinitionRepository) Create(service ServiceDefinition) error {
	result := r.db.Create(&service)
	if result.Error != nil {
		return result.Error
	}
	return nil
}

Using GORM greatly simplified writing this repository, and it required only minimal changes to our ServiceDefinition struct:

type ServiceDefinition struct {
	gorm.Model
	Name      string `json:"name" gorm:"unique"`
	ImageName string `json:"image_name"`
	....
}
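
The db.NewSqliteDB helper used later in main.go is not reproduced in this post. Assuming it simply opens a local database file with the gorm.io SQLite driver (the file name cless.db below is made up), it could look roughly like this:

package db

import (
	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

// NewSqliteDB opens (or creates) the local SQLite database file.
func NewSqliteDB() (*gorm.DB, error) {
	return gorm.Open(sqlite.Open("cless.db"), &gorm.Config{})
}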

Hostname Strategy:

Instead of manually adding a fixed set of five hostnames to our /etc/hosts file, we will generate 101 hostnames (app-0 through app-100) and add them all to the file. We will then adjust the code so that the ServiceDefinitionManager picks from these available hosts at random. This setup is closer to how a real serverless platform operates.

Our /etc/hosts would then look similar to this (aliases on a line are separated by whitespace):

127.0.0.1       admin.cless.cloud
127.0.0.1  app-0.cless.cloud app-1.cless.cloud app-2.cless.cloud app-3.cless.cloud app-4.cless.cloud app-5.cless.cloud app-6.cless.cloud app-7.cless.cloud app-8.cless.cloud app-9.cless.cloud app-10.cless.cloud app-11.cless.cloud app-12.cless.cloud app-13.cless.cloud app-14.cless.cloud app-15.cless.cloud app-16.cless.cloud app-17.cless.cloud app-18.cless.cloud app-19.cless.cloud app-20.cless.cloud app-21.cless.cloud app-22.cless.cloud app-23.cless.cloud app-24.cless.cloud app-25.cless.cloud app-26.cless.cloud app-27.cless.cloud app-28.cless.cloud app-29.cless.cloud app-30.cless.cloud app-31.cless.cloud app-32.cless.cloud app-33.cless.cloud app-34.cless.cloud app-35.cless.cloud app-36.cless.cloud app-37.cless.cloud app-38.cless.cloud app-39.cless.cloud app-40.cless.cloud app-41.cless.cloud app-42.cless.cloud app-43.cless.cloud app-44.cless.cloud app-45.cless.cloud app-46.cless.cloud app-47.cless.cloud app-48.cless.cloud app-49.cless.cloud app-50.cless.cloud app-51.cless.cloud app-52.cless.cloud app-53.cless.cloud app-54.cless.cloud app-55.cless.cloud app-56.cless.cloud app-57.cless.cloud app-58.cless.cloud app-59.cless.cloud app-60.cless.cloud app-61.cless.cloud app-62.cless.cloud app-63.cless.cloud app-64.cless.cloud app-65.cless.cloud app-66.cless.cloud app-67.cless.cloud app-68.cless.cloud app-69.cless.cloud app-70.cless.cloud app-71.cless.cloud app-72.cless.cloud app-73.cless.cloud app-74.cless.cloud app-75.cless.cloud app-76.cless.cloud app-77.cless.cloud app-78.cless.cloud app-79.cless.cloud app-80.cless.cloud app-81.cless.cloud app-82.cless.cloud app-83.cless.cloud app-84.cless.cloud app-85.cless.cloud app-86.cless.cloud app-87.cless.cloud app-88.cless.cloud app-89.cless.cloud app-90.cless.cloud app-91.cless.cloud app-92.cless.cloud app-93.cless.cloud app-94.cless.cloud app-95.cless.cloud app-96.cless.cloud app-97.cless.cloud app-98.cless.cloud app-99.cless.cloud app-100.cless.cloud
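
Typing 101 hostnames by hand is tedious, so a tiny throwaway Go program (not part of the repository) can generate the line for you:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Build app-0.cless.cloud ... app-100.cless.cloud and print a single /etc/hosts line.
	hosts := make([]string, 0, 101)
	for i := 0; i <= 100; i++ {
		hosts = append(hosts, fmt.Sprintf("app-%d.cless.cloud", i))
	}
	fmt.Printf("127.0.0.1  %s\n", strings.Join(hosts, " "))
}

You can append its output with something like go run gen_hosts.go | sudo tee -a /etc/hosts (the file name is arbitrary).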

Then we update ServiceDefinitionManager to pick from this list:

const HostNameTemplate = "app-%d.cless.cloud"

type ServiceDefinitionManager struct {
	repo  ServiceDefinitionRepository
	hosts map[string]bool
	mutex sync.Mutex
}

func SetOfAvailableHosts() map[string]bool {
	hosts := make(map[string]bool)
	for i := 0; i <= 100; i++ {
		hosts[fmt.Sprintf(HostNameTemplate, i)] = true
	}
	return hosts
}

func NewServiceDefinitionManager(repo ServiceDefinitionRepository) *ServiceDefinitionManager {
	hosts := SetOfAvailableHosts()
	sDefs, err := repo.GetAll()
	if err != nil {
		panic(err)
	}
	for _, sDef := range sDefs {
		delete(hosts, sDef.Host)
	}
	return &ServiceDefinitionManager{
		repo:  repo,
		hosts: hosts,
		mutex: sync.Mutex{},
	}
}

Upon the creation of the ServiceDefinitionManager singleton, the code generates a set of available hosts. It then removes any hosts from this set that have already been reserved.

The structure of our singleton remains largely unchanged, with the exception of the RegisterServiceDefinition method:

func (m *ServiceDefinitionManager) RegisterServiceDefinition(name string, imageName string, imageTag string, port int, host string) error {
	m.mutex.Lock()
	defer m.mutex.Unlock()
	service := ServiceDefinition{
		Name:      name,
		ImageName: imageName,
		ImageTag:  imageTag,
		Port:      port,
	}
	if host != "" {
		service.Host = host
	} else {
		h, err := m.NewHostName()
		if err != nil {
			return err
		}
		service.Host = *h
	}
	err := m.repo.Create(service)
	if err != nil {
		return err
	}

	delete(m.hosts, service.Host)
	return nil
}

func (m *ServiceDefinitionManager) ListAllServiceDefinitions() ([]ServiceDefinition, error) {
	return m.repo.GetAll()
}

func (m *ServiceDefinitionManager) GetServiceDefinitionByName(name string) (*ServiceDefinition, error) {
	return m.repo.GetByName(name)
}

func (m *ServiceDefinitionManager) GetServiceDefinitionByHost(hostname string) (*ServiceDefinition, error) {
	return m.repo.GetByHostName(hostname)
}

func (m *ServiceDefinitionManager) NewHostName() (*string, error) {
	if len(m.hosts) == 0 {
		return nil, errors.New("no more hosts available")
	}
	for host := range m.hosts {
		return &host, nil
	}
	return nil, errors.New("no more hosts available")
}
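
As a quick illustration (the service name, image and port are made up), registering a service without an explicit host lets the manager hand out one of the remaining app-N hostnames. Given a repo created as shown earlier:

// Illustrative only: let the manager pick a host for us.
manager := NewServiceDefinitionManager(repo)
if err := manager.RegisterServiceDefinition("hello", "hello-server", "latest", 8080, ""); err != nil {
	panic(err)
}
sDef, err := manager.GetServiceDefinitionByName("hello")
if err != nil {
	panic(err)
}
fmt.Println("assigned host:", sDef.Host) // e.g. app-42.cless.cloud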

Docker Go Client:

Although exec.Command served us well for basic functionality, using the Docker Go client to build the DockerContainerManager is the more idiomatic approach.

First, we add the Docker Go client to our DockerContainerManager struct and initialize it as follows:

type DockerContainerManager struct {
	mutex        *sync.Mutex
	containers   map[string]*RunningService
	usedPorts    map[int]bool
	sDefManager  *admin.ServiceDefinitionManager
	dockerClient *client.Client
}

func NewDockerContainerManager(manager *admin.ServiceDefinitionManager) (ContainerManager, error) {
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		return nil, err
	}
	mgr := &DockerContainerManager{
		mutex:        &sync.Mutex{},
		containers:   make(map[string]*RunningService),
		usedPorts:    make(map[int]bool),
		sDefManager:  manager,
		dockerClient: cli,
	}

	return mgr, nil
}
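
If the client and the daemon disagree on API versions, you may see "client version is too new" errors. In that case, the SDK's client.WithAPIVersionNegotiation() option is worth adding as a small variation on the constructor above:

// Optional: negotiate the API version with the daemon instead of pinning the client default.
cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
if err != nil {
	return nil, err
}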

You may run into the same issue I did on macOS, where I had to create a symlink to /var/run/docker.sock. You can use the following command; don't worry if the symlink already exists, the command will not create a duplicate.

ls /var/run/docker.sock || sudo ln -s ~/.docker/run/docker.sock /var/run/docker.sock

The next step involves updating just one method, namely createContainer:

// createContainer creates and starts a container through the Docker API
func (cm *DockerContainerManager) createContainer(sDef *admin.ServiceDefinition, assignedPort int) (*RunningService, error) {

	image := fmt.Sprintf("%s:%s", sDef.ImageName, sDef.ImageTag)
	ctx := context.Background()
	resp, err := cm.dockerClient.ContainerCreate(
		ctx,
		&container.Config{
			Image: image,
			Tty:   false,
		},
		&container.HostConfig{
			PortBindings: buildPortBindings(sDef.Port, assignedPort),
		},
		nil,
		nil,
		"",
	)
	if err != nil {
		return nil, err
	}

	if err := cm.dockerClient.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{}); err != nil {
		return nil, err
	}

	rSvc := RunningService{
		ContainerID:  resp.ID,
		AssignedPort: assignedPort,
		Ready:        false,
	}

	return &rSvc, nil
}

func buildPortBindings(sDefPort, assignedPort int) nat.PortMap {
	portBindings := nat.PortMap{
		nat.Port(fmt.Sprintf("%d/tcp", sDefPort)): []nat.PortBinding{
			{
				HostIP:   "127.0.0.1",
				HostPort: fmt.Sprintf("%d", assignedPort),
			},
		},
	}

	return portBindings
}
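
For reference, the Docker-related imports this file relies on are roughly the following (project-internal imports such as the admin package are omitted, and exact module versions depend on the SDK release you pin):

import (
	"context"
	"fmt"
	"sync"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
	"github.com/docker/go-connections/nat"
)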

Graceful Shutdown:

A graceful shutdown gives the application adequate time to complete its in-flight work and release its resources before it terminates.

For instance, we want our application to first shut down the HTTP server, so no new requests arrive, and then stop and remove all the running containers. This orderly shutdown ensures tasks are completed and resources are properly released, maintaining system integrity.

First, we introduce a StopAndRemoveAllContainers() []error method on the ContainerManager interface, then implement it in the DockerContainerManager.

type ContainerManager interface {
	GetRunningServiceForHost(host string) (*string, error)
	StopAndRemoveAllContainers() []error
}

func (cm *DockerContainerManager) StopAndRemoveAllContainers() []error {
	cm.mutex.Lock()
	defer cm.mutex.Unlock()
	var errors []error
	for _, rSvc := range cm.containers {
		err := cm.dockerClient.ContainerKill(context.Background(), rSvc.ContainerID, "SIGKILL")
		if err != nil {
			errors = append(errors, err)
		}
		err = cm.dockerClient.ContainerRemove(context.Background(), rSvc.ContainerID, types.ContainerRemoveOptions{})
		if err != nil {
			errors = append(errors, err)
		}
	}
	return errors
}
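
A small alternative worth knowing about: ContainerRemove accepts a Force flag that kills a still-running container as part of removal, so the two calls above could be collapsed into one (sketch only; the loop and error collection stay the same):

// Force removal kills the container first if it is still running.
err := cm.dockerClient.ContainerRemove(context.Background(), rSvc.ContainerID,
	types.ContainerRemoveOptions{Force: true})
if err != nil {
	errors = append(errors, err)
}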

Next, we wire this into our main.go file, where we instantiate the server, by listening for the interrupt signal:

var containerManager container.ContainerManager
var gormDbInstance *gorm.DB
var err error
var srv = &http.Server{
	Addr: ":80",
}

func main() {
	// logging
	debug := flag.Bool("debug", false, "sets log level to debug")
	flag.Parse()
	zerolog.SetGlobalLevel(zerolog.InfoLevel)
	if *debug {
		zerolog.SetGlobalLevel(zerolog.DebugLevel)
	}

	// sqlite db instance
	gormDbInstance, err = db.NewSqliteDB()
	if err != nil {
		log.Error().Err(err).Msg("Failed to create sqlite db")
		panic(err)
	}

	// admin service/server
	repo := admin.NewSqliteServiceDefinitionRepository(gormDbInstance)
	manager := admin.NewServiceDefinitionManager(repo)
	go admin.StartAdminServer(manager)

	// container manager
	containerManager, err = container.NewDockerContainerManager(manager)
	if err != nil {
		fmt.Printf("Failed to create container manager: %s\n", err)
		return
	}

	// setup http server
	http.HandleFunc("/", handler)
	go func() {
		if err := srv.ListenAndServe(); err != nil && !errors.Is(err, http.ErrServerClosed) {
			log.Fatal().Err(err).Msg("Failed to start http server")
		}
	}()

	// graceful shutdown
	quit := make(chan os.Signal, 1)
	signal.Notify(quit, os.Interrupt)
	<-quit
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Error().Err(err).Msg("Failed to shut down http server gracefully")
	}
	errList := containerManager.StopAndRemoveAllContainers()
	if len(errList) > 0 {
		log.Error().Errs("errors", errList).Msg("Failed to stop and remove containers")
	} else {
		log.Info().Msg("Stopped and removed all containers")
	}
}
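
One optional tweak that is not in the repository: when the binary runs under Docker or systemd, termination usually arrives as SIGTERM rather than an interrupt, so you may also want to register for it (this requires importing syscall):

// Also react to SIGTERM, which Docker and systemd send on shutdown.
signal.Notify(quit, os.Interrupt, syscall.SIGTERM)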

Instead of calling http.ListenAndServe(...), we create a server instance so we can later call its Shutdown method and avoid terminating the server abruptly.

After the server has shut down, we call StopAndRemoveAllContainers to clean up any Docker containers that are still running.

Conclusion:

The complete code related to this article can be found in the cLess repository on GitHub. For instructions on how to run the code locally, refer to the README file included in the repository under part-3.

Our Build Your Own Serverless project is gradually taking shape as we enhance and supplement it with a range of features. In our next update, we will continue to introduce more features and improvements, further advancing the project towards a stable 1.0.0 status.


It's great that you made it through the entire blog post. The other parts of the series should be interesting as well:

  • Part 1: The Basics (MVP)
  • Part 2: Admin Service & Modularity
  • Part 4: versioning, traffic distribution, env variables & garbage collection of idle containers.