Kafkaesque streams built on firebase
Design and prototyping. Not suitable for any kind of usage.
firestream is designed to provide kafkaesque streams with minimal hassle for pico-scale applications or MVPs. Once your application is no longer pico-scale, using firestream is a great way to ensure bad things happen.
firestream aims to give your application (or business) enough runway to grow until it can absorb the operational cost of kafka.
Both at rest and in transit, all messages are stored in stringified form, which is the form firestream itself works in.
Partitions are not provided for in firestream. If you really really really need partitions, it's probably time to switch to kafka.
firestream allows for multiple consumers and producers. However, it only allows consumer groups with one consumer. If you really really really need more than one consumer in a consumer group, it's probably time to switch to kafka.
There is only one broker: that broker is your firebase instance. There is only one cluster.
The design of firestream's interface is inspired by pyr's somewhat opinionated client library for kafka.
"Once connectivity is reestablished, we'll receive the appropriate set of events so that the client "catches up" with the current server state, without having to write any custom code." - Peeps from Firebase
The theoretical limits* of firestream (i.e. running it on the biggest machine you can find) are derived by taking an eighth of the limits of firebase. For pico-scale applications or MVPs it's unlikely you'll hit the limits of firebase or firestream. Here they are anyway:
- Maximum system throughput (reads and writes): ~500 per second
- Maximum payload during write: 2MB
- Maximum write speed: ~0.5MB per second
- Maximum data transfer per read: 200MB
- Maximum number of messages per topic: 9 million
*It's quite likely that you can get more perf than the above, but better safe than sorry.
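The divide-by-eight safety margin above can be sketched as a quick calculation. The firebase-side figures here are simply the README's numbers multiplied back out by 8; they are illustrative assumptions, not firebase's official published limits.

```python
# Back-computed firebase limits (assumed: README's firestream numbers x 8).
firebase_limits = {
    "throughput_ops_per_sec": 4000,  # reads and writes combined
    "write_payload_mb": 16,
    "write_speed_mb_per_sec": 4,
    "read_transfer_mb": 1600,
}

# firestream's conservative limits: an eighth of the firebase figures.
firestream_limits = {k: v / 8 for k, v in firebase_limits.items()}

print(firestream_limits["throughput_ops_per_sec"])  # 500.0
print(firestream_limits["write_payload_mb"])        # 2.0
```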
Firebase usage limits, in case they change
You can grab firestream from clojars: [alekcz/firestream "0.1.0-SNAPSHOT"].
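If you're using Leiningen, the clojars coordinate above goes in your project.clj dependencies. A minimal sketch (the project name and Clojure version are placeholders):

```clojure
;; project.clj — minimal Leiningen sketch
(defproject my-app "0.1.0"
  :dependencies [[org.clojure/clojure "1.10.0"]
                 [alekcz/firestream "0.1.0-SNAPSHOT"]])
```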
- You need to create a project on firebase to get started. So do that first.
- Once you've created your project, set up a Realtime Database.
- We don't want any frontends or non-admin apps to access our database, as this database will be at the core of our stream. So we need to deny all non-admin access using the firebase security rules. You can use the rules below.
{
  "rules": {
    ".read": false,  // block all non-admin reads
    ".write": false  // block all non-admin writes
  }
}
- Download the JSON file containing your credentials by following the instructions here: https://firebase.google.com/docs/admin/setup
- Set the GOOGLE_CLOUD_PROJECT environment variable to the firebase id of your project e.g. "alekcz-test"
- Set the FIREBASE_CONFIG environment variable to the contents of your JSON key file. (Sometimes it may be necessary to remove all the line breaks and wrap the key contents in single quotes to escape all the special characters within it, e.g. FIREBASE_CONFIG='...')
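The two environment-variable steps above might look like this in a shell (the project id is the README's example, and the FIREBASE_CONFIG value is an abbreviated placeholder — your real key file has many more fields):

```shell
# Point firestream at your firebase project.
export GOOGLE_CLOUD_PROJECT="alekcz-test"

# The service-account key JSON, single-quoted so its special
# characters survive the shell. (Abbreviated placeholder below.)
export FIREBASE_CONFIG='{"type":"service_account","project_id":"alekcz-test"}'

echo "$GOOGLE_CLOUD_PROJECT"
```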
You're now good to go.
The firestream API has 5 functions:
producer: Create a producer
send!: Send new message to topic
consumer: Create a consumer
subscribe!: Subscribe to a topic
poll!: Read messages ready for consumption
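The five functions above compose into a simple produce/consume loop. A hypothetical walk-through follows; the namespace, argument order, and config keys are all assumptions — consult the library's docstrings for the real signatures.

```clojure
;; Sketch only: names and arities below are assumed, not confirmed.
(require '[firestream.core :as fire])

(def p (fire/producer {:path "example"}))   ;; 1. create a producer
(fire/send! p :my-topic :my-key {:n 1})     ;; 2. send a new message to a topic
(def c (fire/consumer {:path "example"}))   ;; 3. create a consumer
(fire/subscribe! c :my-topic)               ;; 4. subscribe to the topic
(fire/poll! c 1000)                         ;; 5. read messages ready for consumption
```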
There are no metrics for the moment, but hopefully someday we'll get to the same level as operatr.io.
When you outgrow firestream and are ready for kafka, hit up the awesome folks at troywest.com to get you started.
Copyright © 2019 Alexander Oloo
This program and the accompanying materials are made available under the terms of the Eclipse Public License 2.0 which is available at http://www.eclipse.org/legal/epl-2.0.
This Source Code may also be made available under the following Secondary Licenses when the conditions for such availability set forth in the Eclipse Public License, v. 2.0 are satisfied: GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version, with the GNU Classpath Exception which is available at https://www.gnu.org/software/classpath/license.html.