Laravel and NodeJS messaging using Redis Pub/Sub

I was recently working on a project composed of two different parts: a web application built in PHP with Laravel, and an AWS Lambda function written in NodeJS. In the past, both applications exchanged data using a common MySQL database. Over time, this setup proved very inefficient. As the number of “messages” sent and received increased, the database could no longer handle the volume of reads and writes required to support both “applications” (the Lambda function is not an application per se, but you know what I mean, right?).

The first thing we tried was changing the database schema to focus on performance rather than data integrity. We dropped some constraints and changed how the data was stored to achieve that. The updates soon proved insufficient.

In a second iteration, we started playing around with Redis. Due to its nature (a key/value store, not a relational database), it’s a lot faster than MySQL. The first attempt using Redis involved simply moving the data we were storing in the database into a set. It seemed to work well, but after just a few tests on a staging server we realized that approach wouldn’t meet the system’s needs. When retrieving the data using the SCAN command, the order of the returned elements is not guaranteed. That was an important downside for us: the business logic required us to read the data in the same order it was written.

Finally, we got to the setup we have now: both sides (the web app and the Lambda function) were updated to use Redis’ Pub/Sub implementation. Laravel supports Redis out of the box, which was a nice thing to have. For the NodeJS part, we used NodeRedis.

Subscribing to a channel

As I mentioned, Laravel already has an interface to deal with Redis. It still needs an underlying client, but most of the operations are pretty straightforward. You may refer to the Laravel docs for more info. Subscribing to a channel requires a single method call:

Redis::subscribe([ 'channel_name' ], function ($message) {
    /* Do whatever you need with the message */
});

I’m using an Artisan command to start this listener, like this:

class Subscriber extends Command
{
    protected $signature = 'redis:subscriber';

    protected $description = '...';

    public function handle()
    {
        Redis::subscribe([ 'channel_name' ], function ($message) {
            $this->processMessage($message);
        });
    }

    public function processMessage(string $message)
    {
        /* Handles the received message */
        $this->info(sprintf('Message received: %s', $message));
    }
}

Now we simply have to trigger the command to start listening to the channel.

You’ll notice that after about a minute without receiving any data, an error is thrown the next time the subscriber gets a message. That’s because the connection timed out. To fix that, we added the following settings to the config/database.php file, inside the "redis" block:

'read_write_timeout' => 0,
'persistent' => 1,
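For context, here is a sketch of where those settings could sit when using the Predis client. The hosts and surrounding structure are illustrative; check the config layout for your Laravel version:

```php
// config/database.php (illustrative excerpt, assuming the Predis client)
'redis' => [
    'client' => 'predis',

    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'port' => env('REDIS_PORT', 6379),
        'database' => 0,
        'read_write_timeout' => 0,
        'persistent' => 1,
    ],
],
```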

Publishing to the channel

On the NodeJS side, we need the aforementioned library. To install it:

$ npm install redis

After that, we’ll need to write our Lambda function that publishes to the channel. Since the focus is the Pub/Sub flow, I’m not using any particular logic to create the message here, just returning the attribute received with the event.

const redis = require('redis');
const client = redis.createClient();

const handler = (event, context) => {
    const message = processEvent(event);
    client.publish('channel_name', message);
    return context.done(null, {
        message: message,
    });
};

const processEvent = (event) => {
    /* Handles the event and returns the message to publish */
    return event.message;
};

exports.handler = handler;

Notice I’m not passing any options to the createClient function. You’ll probably want to set the host or any other custom configuration you have to properly connect to the Redis instance. Check the NodeRedis docs for more info about the available options.

Testing it all together

First, start the Artisan command. If you used the same name from my example above, you should be able to run the following:

$ php artisan redis:subscriber

Then you have to run your Lambda function to publish messages. You can do that after deploying the code to AWS, or you can run it locally with a mockup of the Lambda environment. Something like this:

const http = require('http');

// This is where the Lambda function is
const lambda = require('./lambda');

const context = {
    done: (error, success) => {
        if (error) {
            console.error('FAIL:', error);
            return;
        }
        console.log('OK:', success);
    },
};

const server = http.createServer((request, response) => {
    let data = '';
    request.on('data', (chunk) => {
        data += chunk;
    });
    request.on('end', () => {
        if (data) {
            const event = JSON.parse(data);
            lambda.handler(event, context);
        }
        response.end();
    });
});

server.on('clientError', (error, socket) => {
    socket.end('HTTP/1.1 400 Bad Request\r\n\r\n');
});

server.listen(3000);

This stub is a very basic mockup of the Lambda environment. It lacks proper error handling and validation, but for the purpose of this test, it does what we need. I strongly recommend not using this code in production, though.

If you named the script above as, say, web.js, you should be able to run it:

$ node web.js

And then invoke the function with cURL:

$ curl -d '{"message":"Hello world!"}' http://localhost:3000

The request body (with the -d param in the command) will be parsed as JSON and sent to the Lambda function as the event. If you check the function again, you’ll notice we’re using the message attribute there.

After executing that command, you should see two different outputs in your console. One from the Lambda mockup, which may look like this:

OK: { message: 'Hello world!' }

And another from the Artisan command:

Message received: Hello world!

The output will change according to the message in the request body.


In this sample code, I showed the basics of Redis Pub/Sub. You don’t necessarily need AWS Lambda to use it; I just wanted to show a nearly real-life use case. Sure, this is still not a real application, but I hope you got the idea.

You may have noticed that this is a way to build what the cool kids out there call microservices. If this is all new to you, maybe this is an opportunity to give it a chance and try building your first distributed application.

Got comments or questions? Feel free to share them below.


Documenting or not

There are lots of people talking about the importance of writing code documentation. Others advocate in favor of not documenting at all; they usually say your code must tell the story by itself. I’d like to make my own statement on this matter: I agree with both sides of this discussion.

I love to write code documentation. And sometimes, no matter how much I try to make my code expressive, I need the support of something more textual. I guess that’s due to my code-writing style. I prefer concise naming for my symbols. Instead of `getAllOrdersFromUser(user)`, I’d rather use `getOrders(user)`. Unless the context requires a more meaningful name, the latter is expressive enough. It says what the function does (gets orders) and based on what (a user). For this example, writing down what the function actually does is not so important, so I can omit the doc block. In other cases, though, some logic details can be hidden behind a weak name. For those, a statement on what that piece of code does may be essential for understanding.
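To make that naming preference concrete, here is a small sketch (the functions and data are invented for the example):

```javascript
// Verbose name: the doc block mostly repeats what the name already says.

/** Returns all orders placed by the given user. */
function getAllOrdersFromUser(user) {
    return user.orders;
}

// Concise name: it still says what the function does (gets orders)
// and based on what (a user), so the doc block can be omitted.
function getOrders(user) {
    return user.orders;
}

const user = { name: 'Ada', orders: ['#1001', '#1002'] };
console.log(getOrders(user)); // [ '#1001', '#1002' ]
```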

Yet, for some projects, I struggle to keep docs up to date. This happens especially when I’m writing something that must be deployed ASAP, or when working with pairs who don’t care about documenting. You know, real life doesn’t always allow you to do everything the way you want to. If I can’t keep my code and documentation in sync, I don’t write a comment that won’t say a thing about the code next to it. It’s preferable to have no doc than something that will confuse another dev or even myself.

There is a place in the middle of these two points of view where day-to-day programming resides. Writing expressively is the better way to code. But if you’re adding documentation, keep in mind that you must update it constantly as your logic changes.

Golden rules for remote work

As I write this post, I figure it’s been roughly 5 years that I’ve been working remotely. During this time, I had a 1-year stint at an on-site job, but I never stopped running my companies in parallel, serving clients, and planning and executing everything that was needed.

Working from home or from coworking spaces, I learned a lot in practice about how to deal with clients and vendors, how to plan, track, and execute projects, and how to get by when everything goes wrong, or close to it. With my recent move to the United States, working remotely gained a much more important meaning, because while before I could visit a client or find a way to spend a day or two near them, that alternative no longer exists.

Adding all this up, I thought it would be useful to write a post with some remote work tips, but then I found this article by Diego Eis, on Tableless, and he already said it all there, so enjoy:
6 dicas para se dar bem em freelas e trabalhos remotos

Propel + Symfony2: Debugging queries in commands

In the development environment of a Symfony2-based project, using the web profiler in the web interface (through the bar at the bottom of the pages) comes in handy in many situations. In the console, we usually don’t have that facility so close at hand, but it’s not impossible to access it. Specifically for queries executed through Propel, you can use the following snippet for debugging purposes:

$profiler = $this->getContainer()->get('profiler');
$db = $profiler->get('propel');
$db->collect(new \Symfony\Component\HttpFoundation\Request(), new \Symfony\Component\HttpFoundation\Response()); // Stubs, not used by the profiler

You can take a look at the Symfony\Bridge\Propel1\DataCollector\PropelDataCollector class and check the available methods.

Other profilers can be accessed through the container, but since the request and the response are not available in the console, not all of them may work as expected.

Book: Código Limpo (Clean Code)

I read this book more than a year ago. I had lent it to a colleague at Gazeta do Povo and, since he returned it this week, I decided to go back to the text out of curiosity (the bus ride home is almost always dedicated to reading). I came across an amazing chapter (17) that I had forgotten about: basically a summary of many of the topics the book covers, with code examples and what to do (and what not to do) in each situation.

Below I list that chapter’s topics related to functions:

Too many arguments

Functions should have a small number of arguments. Having none is best; then come one, two, and three. More than that is questionable and should be avoided with prejudice.

Output arguments

Output arguments are counterintuitive. Readers expect arguments to be inputs, not outputs. If your function must change the state of something, have it change the state of the object it is called on.

Flag arguments

Boolean arguments loudly declare that the function does more than one thing. They are confusing and should be eliminated.

Dead functions

Methods that are never called should be discarded. Keeping dead code around is wasteful. Don’t be afraid to delete the function; remember, your source control system will still remember it.
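As an illustration of the point about boolean arguments, here is a quick sketch (the functions are invented for the example):

```javascript
// A boolean argument is a sign that the function does two things.
function renderPage(isTestMode) {
    return isTestMode ? 'test page' : 'live page';
}

// Splitting it into two functions makes each call site read plainly.
function renderTestPage() {
    return 'test page';
}

function renderLivePage() {
    return 'live page';
}

console.log(renderLivePage()); // 'live page'
```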

If this excerpt has sparked your interest in the book, here’s a tip: read the original, in English, if you can. Or be prepared for a very flawed translation that, at some points, demands a lot of time for interpretation and deduction.


Martin, Robert C. – Código Limpo: Habilidades Práticas do Agile Software. AltaBooks, 2009.