
Posts

Translating a bit of the idea behind domain driven design into code architecture

I've often taken part in discussions (arguments, really) about whether thin models or thin controllers should be preferred.

The argument for a thin controller is that testing a fat controller in isolation means stubbing out the dependencies of your request and response. A fat controller also violates the single responsibility principle, because it has more than one reason to change.

Seemingly, the alternative is to settle for fat models. This puts your domain logic right next to your persistence logic, and if you ever want to change your persistence layer you're in for a painful time. That's a bit of a cargo-cult argument, because honestly, who does that? But it's also a violation of the single responsibility principle.

One way to decouple your domain logic from both the persistence layer and the controller is to use the "repository pattern". Here we encapsulate domain logic into a data service. This layer deals exclusively with implementing…
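As a rough sketch of the shape this takes (the names here are hypothetical, invented for illustration), the domain model and the service logic depend only on a repository abstraction, and the storage detail becomes swappable:

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class Customer:
        # A pure domain model: no persistence concerns in here.
        id: int
        name: str

    class CustomerRepository(ABC):
        # The domain and controllers depend only on this abstraction.
        @abstractmethod
        def get(self, customer_id: int) -> Customer: ...

        @abstractmethod
        def add(self, customer: Customer) -> None: ...

    class InMemoryCustomerRepository(CustomerRepository):
        # One swappable persistence detail; a SQL-backed version could
        # replace it without touching the domain logic.
        def __init__(self) -> None:
            self._rows: dict[int, Customer] = {}

        def get(self, customer_id: int) -> Customer:
            return self._rows[customer_id]

        def add(self, customer: Customer) -> None:
            self._rows[customer.id] = customer

    def rename_customer(repo: CustomerRepository, customer_id: int, new_name: str) -> None:
        # Service-level logic stays ignorant of how customers are stored.
        customer = repo.get(customer_id)
        customer.name = new_name
        repo.add(customer)

    # A controller (or a test) injects whichever implementation it likes:
    repo = InMemoryCustomerRepository()
    repo.add(Customer(id=1, name="Ada"))
    rename_customer(repo, 1, "Ada Lovelace")

Because the controller and the domain logic see only CustomerRepository, you can test them against the in-memory version and deploy against a real database version.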
Recent posts

Grokking PHP monolog context into Elastic

An indexed and searchable centralized log is one of those tools that, once you've had it, you'll wonder how you ever managed without. I've experienced a few advantages to using a central log: debugging, monitoring performance, and catching unknown problems.

Debugging

Debugging becomes easier because instead of poking around grepping text logs on servers, you can use a GUI to contrast and compare values between different time ranges. A ticket will often include only sparse information about the problem and the observed error, but if you know more or less when a problem occurred then you can check the logs of all your systems at that time. Problem behaviour in your application can occur as a result of the services you depend on; a database fault will produce errors in your application, for example. If you log your database errors and your application errors in the same central platform then it's much more convenient to compare behaviour between…

Streaming MPEG-TS to your Raspberry Pi

Image: Raspberry Penguin, CC, openclipart.org

Usually people focus on getting camera streams off their Pi, but I wanted to set up a test environment that would let me play with some encoding and distribution software. For the client in this scenario I'm running a Raspberry Pi 3 Model B, plugged into an ethernet cable on my home network. I'm using my Ubuntu 18.04 laptop to serve up the multicast stream.

If you're new to multicast, the first thing to read about is the reserved IP ranges on Wikipedia. You can't broadcast on just any old address, but you do have a very wide selection to choose from. I decided to broadcast on udp://239.255.0.1 and port 5000, because why not?

I downloaded a video from https://www.stockio.com/free-videos/ and saved it into a folder. From there, all you need to do is run ffmpeg on the server, thusly:

    ffmpeg -re -i costa.mp4 -bsf:v h264_mp4toannexb -acodec copy -vcodec copy -f mpegts udp://239.255.0.1:5000

Using Azure Active Directory as an OAuth2 provider for Django

Azure Active Directory is a great product and is invaluable in the enterprise space. In this article we'll be setting it up to provide tokens for the OAuth2 client credentials grant. This authorization flow is useful when you want to authorize server-to-server communication that might not be on behalf of a user.

Diagram: the client credentials grant flow, from the Microsoft documentation.

The flow goes like this:

1. The client sends a request to Azure AD for a token.
2. Azure AD verifies the attached authentication information and issues an access token.
3. The client calls the API with the access token. The API server is able to verify the validity of the token and therefore the identity of the client.
4. The API responds to the client.

Setting up Azure AD as an OAuth2 identity provider

The first step is to create applications in your AD for both your API server and the client. You can find step-by-step instructions on how to register the applications…
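As a minimal sketch of steps 1 to 3 of that flow (the tenant, IDs, and scope below are hypothetical placeholders, not values from this article), a client might request and use a token like this:

    import requests

    # Hypothetical placeholders; substitute the values from your own app registrations.
    TENANT_ID = "your-tenant-id"
    CLIENT_ID = "client-application-id"
    CLIENT_SECRET = "client-secret"
    # For client credentials, the scope is the API app's identifier URI plus /.default
    SCOPE = "api://your-api-app-id/.default"

    token_url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"

    # Steps 1 and 2: request a token; Azure AD verifies the secret and issues one.
    response = requests.post(token_url, data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": SCOPE,
    })
    response.raise_for_status()
    access_token = response.json()["access_token"]

    # Steps 3 and 4: call the API with the bearer token and read the response.
    api_response = requests.get(
        "https://example.com/api/things",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    print(api_response.status_code)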

Standardizing infra - lxd is not Docker

Image: LXD logo, from the Ubuntu blog.

Selling a software package that is deployed to a customer data-centre can be challenging because of the diversity found in the physical infrastructure. It is expensive to make (and test) adjustments to the software that allow it to run in a non-standard manner. Maintaining a swarm of snowflakes is not a scalable business practice: more deployment variations mean more documentation and more pairs of hands needed to manage them. Uniformity and automation are what keep our prices competitive.

Opposing the engineering team's desire for uniformity is the practical need to fit our solution into the customer's data-centre and budget. We can't define a uniform physical infrastructure that all of our customers must adhere to. Well, I suppose we could, but we would only have a very small number of customers willing and able to fit their data-centre around us. We are therefore on the horns of a dilemma. Specifying standard physical hardware…

Is blockchain "web scale"?

For something to be truly awesome, it must be "web scale", right? The rather excellent video just below shows how hype can blind us to the real value of a technology. It's quite a famous trope in some circles, and I think it's a useful parallel to the excitement about blockchain. In it, an enthusiastic user of a new technology repeats marketing hype despite having no understanding of the technical concerns involved. Watch the video and you'll hear every piece of marketing spin that surrounds NoSQL databases.

I usually get very excited about new technologies, but I'm quite underwhelmed by blockchain. In the commercial context I see it as a great solution that really just needs to find a problem to solve. Lots of companies are looking for the problem that the blockchain solution will solve. Facebook has a group dedicated to studying blockchain and how it can be used in Facebook. They don't seem to have found a particularly novel use case…

Component cohesion

Image: Pixabay

Breaking your application down into components can be a useful part of a "divide and conquer" methodology. Assigning specific behaviour to a component and then defining interfaces for other components to access it allows you to develop a service-driven architecture. I'm in the process of decomposing a monolithic application into services that will eventually become standalone micro-services. Part of the task ahead lies in determining the service boundaries, which are analogous to software components for my micro-service application. I want components to be modular so that they can be developed and deployed as independently as possible.

I'm using the approach suggested by Eric Evans in his book on domain driven design, where he describes the concept of "bounded contexts". I like to think of a bounded context as being to domain models what a namespace is to classes. These contexts are spaces where a domain model defined in…
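A loose illustration of that analogy (my own sketch, with invented contexts, not an example from the book): give each bounded context its own package, so the "same" domain concept can be modelled differently in each, just as a namespace lets two classes share a name.

    # sales/models.py - a hypothetical "sales" bounded context
    from dataclasses import dataclass

    @dataclass
    class Product:
        # A product as the sales context understands it: something with a price.
        sku: str
        unit_price: float

    # shipping/models.py - a hypothetical "shipping" bounded context
    from dataclasses import dataclass

    @dataclass
    class Product:
        # The same real-world product, modelled for shipping concerns.
        sku: str
        weight_kg: float

    # Elsewhere, the package (the context) disambiguates the two models:
    #   from sales.models import Product as SalesProduct
    #   from shipping.models import Product as ShippingProduct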