Sunday, April 25, 2021

Ngrx Selectors

I look at selectors in Ngrx as queries on the data store. They are the method of getting data from the store to the components in the shape of an observable.

constructor(private store: Store) {
  this.dataobservable = this.store.select(selectorfunction);
}

The thing to remember is that store.select passes the entire state to the selector function. Let's lay out a state structure and see how selectors would work.

export const JobsFeatureReducer: ActionReducerMap<JobClientState> = {
  jobs: jobs.reducer,
  jobrelations: job_relations_entity.reducer,
  jobactions: job_actions_entity.reducer,
  jobfiles: job_files_entity.reducer,
  jobdispatch: job_dispatch_entity.reducer,
  jobperson: job_person_entity.reducer,
  jobquery: jobqueryreducer,
  jobprimary: primaryjobreducer,
};

and

export interface JobClientState {
    jobs: jobs.State;
    jobrelations: job_relations_entity.State;
    jobactions: job_actions_entity.State;
    jobfiles: job_files_entity.State;
    jobdispatch: job_dispatch_entity.State;
    jobperson: job_person_entity.State;
    jobquery: JobQuery;
    jobprimary: string;
}

Everything except the Query and JobPrimary is Entity State.

export interface JobQuery {
    customer: string;
    jobnumber: string; //tid
    status: string;
}

JobClientState is a feature state, so any selector for this starts with

export const getJobState = createFeatureSelector<JobClientState>(jobFeatureKey);

Remember, this.store.select(selector) passes the entire state tree to the selector. If this wasn't a feature state, the first selector would look like this

export const getJobState = (state: AppState) => state.Job;

where "Job" is jobFeatureKey. 

Where do I put this? The file structure of where you place your selectors is important. In this instance, the data from one entity will be needed by another to assemble the data, and if you aren't careful you can create a circular dependency between files. The solution is to build a tree: create a file for each Entity or state property, then create a file where the different entities or properties are combined.

Let's start with the jobs state selectors. This is an Entity state, which exposes selectors for the data. Entity state looks like this: 

{ ids: string[], entities: Dictionary<T>}

Dictionary is an object keyed by id: you access an entity with its id, and ids is an array of those ids. The id is derived from the object itself; you need a unique id for each entity.

entities[id]

Entity selectors are generated by the schematic, and look like this

export const {
    selectIds,
    selectEntities,
    selectAll,
    selectTotal,
} = adapter.getSelectors();

With the feature selector, and the Entity selectors, we can then combine selectors and drill down to the data we want. So for the job state:

export const getJobs = createSelector(getJobState, (state) => state.jobs);

export const getJobIds = createSelector(getJobs, jobs.selectIds);

export const getJobEntities = createSelector(getJobs, jobs.selectEntities);

Each of the JobClientState properties that are Entity State will have the same type of selectors.
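
For example, the jobfiles selectors used further down follow the same pattern; a sketch, assuming the job_files_entity module exports its adapter selectors the same way the jobs module does:

export const getJobFiles = createSelector(getJobState, (state) => state.jobfiles);

export const getJobFileIds = createSelector(getJobFiles, job_files_entity.selectIds);

export const getJobFileEntities = createSelector(getJobFiles, job_files_entity.selectEntities);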

What about the Query and Primary states? Query is for the list of jobs to be displayed for the user to select. JobPrimary is the selected job, and has a similar selector.

export const getJobQuery = createSelector(getJobState, (state) => state.jobquery);

When a job is selected, the user navigates to an edit or view url, with the id. The router state is subscribed to, and the primaryjob state is set with that id. The view then uses this selector to get the selected job

export const getJobPrimary = createSelector( getJobState, (state) => state.jobprimary)

export const JobPrimaryEntity = createSelector( getJobPrimary, getJobEntities, (primary, entities) => entities[primary]);

As you can see, JobPrimaryEntity returns the selected Job, using the primary id as the key into the entities dictionary.

The other selectors use the primary id to get the related files, actions, people, etc. to build out the job view. For example, to get the list of files attached to a Job:

export const JobPrimaryFiles = createSelector(
  getJobPrimary,
  getJobFileIds,
  getJobFileEntities,
  (primary, ids, entities) =>
      ids.filter(id => entities[id].jobid === primary).map(id => entities[id])
)

ids is an array, so you filter by the jobid, then map the list of ids to a list of entities. That gives the component a list of objects containing the job files to render.

To limit the number of null checks required, make sure that you set up an initial state that will work if passed to the selectors.
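
With an entity adapter, that is mostly free; a sketch of what I mean, with the plain properties getting harmless empty values:

export const initialState: State = adapter.getInitialState();
// => { ids: [], entities: {} }, so selectAll returns [] instead of throwing

export const initialQuery: JobQuery = { customer: '', jobnumber: '', status: '' };
export const initialPrimary = ''; // entities[''] is undefined, which is easy to guard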

Other feature states, such as contacts or invoices, can be connected to a job in the same way. Imagine you have a list of people connected to a job, and the job state holds this list in the form of contact ids. You can use the selectors from the Contact feature state to get the list of people connected to the job; all you have to do is ensure that the required contacts are loaded in the contacts state for the selectors to work.
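
A sketch of such a cross-feature selector, assuming the job carries a contactids array and the Contacts feature exports a getContactEntities selector (both names are mine, for illustration):

export const JobPrimaryContacts = createSelector(
  JobPrimaryEntity,
  getContactEntities,  // from the Contacts feature's selector file
  (job, contacts) =>
    job ? job.contactids.map(id => contacts[id]).filter(contact => !!contact) : []
);

The filter covers the case where a contact hasn't been loaded into the contacts state yet.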

Why so many selectors? In one word: memoization. The last computed value is stored, meaning the selector observable will not emit unless there is a change in the data. From the app state down to each property, each input is compared to its previous value, and the selector will not emit unless something is different. That means you can change a value in your contact state and the job selectors will not emit.

I would suggest not taking shortcuts. Write out all the selectors and compose them. You will find that your code is robust to the inevitable refactors and spec changes that happen through the app development cycle. And it will perform well.

I have a simple rule that I follow with Ngrx and Angular; if I'm standing on my head to get something to work, I'm doing it wrong. If your selectors are complex and fragile, consider restructuring your state layout.



Friday, April 02, 2021

NGRX Effects

 "In computer science, an operation, function or expression is said to have a side effect if it modifies some state variable value(s) outside its local environment, that is to say has an observable effect besides returning a value (the main effect) to the invoker of the operation."

In the Redux and Ngrx pattern, application state is modified one way only: an action is dispatched, and the reducer function returns a modified state. An action that does something other than that is a side effect, and in Ngrx it is handled by an Effect.

To work with Ngrx Effects you need two things.

  1. An understanding of how Effects work and what they can do.
  2. The ability to work with Rxjs and its operators.
Let's start with Effects.

getinvoices = createEffect(() => 
    observable.pipe(),
    {dispatch: boolean})

An Effect subscribes to an observable, and merges the result into the Action stream. Or not, if you set dispatch to false.

That's it.

Not quite. There is initializing and some lifecycle hooks. https://ngrx.io/guide/effects

The magic happens in the Rxjs observable stream.

The most common usage is to subscribe to the action stream. This is what that looks like

import { Actions, createEffect, ofType } from '@ngrx/effects';

@Injectable()
export class AuthenticationEffects {
  constructor(private actions$: Actions) {}

  loginsuccess$ = createEffect(() =>
    this.actions$.pipe(
      ofType(LoginSuccess),
      ...

The Actions observable stream is injected into the class, and this.actions$ is subscribed to in the effect. That means that all the actions dispatched in the app are emitted by this.actions$.

To listen to a specific action, we could do something like this:

filter((action: Action) => action.type === 'the action type we want to find')

But we are provided with a far better solution: the ofType operator. We can list all the actions we want to listen for, or just one. In this example, we are watching for the LoginSuccess action. If the filter returns false, the observable chain stops and waits for the next one.
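
Listening for more than one action is just a longer argument list; a quick sketch (the second action is hypothetical):

this.actions$.pipe(
    ofType(LoginSuccess, SessionRestored)
)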

What would you do if a user successfully logged in to the app? That action would likely have the user name, maybe an email address. So a reducer would listen for that action and update the authentication state. But there may be other side effects.
  • Navigate to a route.
  • Initialize a websocket service.
  • Fetch roles and permissions for that user from the api.
  • Load some common data for use in the app.
  • Notify the user.
Let's see how this would be done, with some examples of the Rxjs operators that would be used.

loginsuccesswebsocket$ = createEffect(() =>
    this.actions$.pipe(
        ofType(LoginSuccess),
        tap(() => this.wsService.initialize())
    ), {dispatch: false}
)

When logged in, initialize a service, in this case the websocket. With dispatch set to false, no action will be merged back into the action stream. This observable uses the tap operator; let's look at what it does.

Here we have what could be called nested side effects. Effects are a side effect of the action dispatch => reducer => modify state path. Tap is a side effect of the observable stream: Actions => filter the type => do something else without affecting the stream. Tap receives the data emitted, and allows you to do things without affecting the data stream. The next operator in line will receive the same data.

Observable streams seem like a black box full of magic. Tap exposes what is happening. This snippet, inserted in the stream, will tell you the data and give a window into the black box.

observable.pipe(
    tap(value => console.log('value emitted', value),
        error => console.log('error', error),
        () => console.log('observable complete'))
)

Another use of tap is to navigate to a route. On login, I check to see if it is a new user or first use, and route to an initialize component to fill in some initial data. Note the use of the data contained within the action.

loginsuccessinit$ = createEffect(() =>
    this.actions$.pipe(
        ofType(LoginSuccess),
        filter(action => action.connectionstate.user.firstrun),
        tap(() => {
          this.router.navigate(['initialization']);
        })
    ), {dispatch: false}
)

When the user logs in, some data is loaded into the state for use across the component tree. 

loginsuccessloaddata$ = createEffect(() =>
    this.actions$.pipe(
        ofType(LoginSuccess),
        map(() => LoadContacts())
    )
)

Notice this effect emits an action that is merged into the Action stream. The map operator is used here to change the value of the data in the stream. 

This is similar to the Array.map() function, where each item in the array can be modified and returned. The observable map is passed each value that is emitted, and it returns the new value. You can confuse yourself to no end by doing something like this

import { from, of } from 'rxjs';
import { map } from 'rxjs/operators';

const myarray = [ 1, 2, 3 ];

const arrayobservable = of(myarray).pipe(  // emits the whole array as one value
    map((items: number[]) => items.map(item => item * 100))
).subscribe(value => console.log(value));  // [ 100, 200, 300 ], one emission

const arrayitemobservable = from(myarray).pipe(  // emits each item of the array
    map((item: number) => item * 100)
).subscribe(value => console.log(value));  // 100, 200, 300, three emissions

You get the idea. map allows you to modify the data and return it. The next operator will see the new data.

What if we want to map the values from an observable, like an HttpClient call?

You would want to flatten the emissions of the inner observable and merge them into the stream. There are four mergeMap variants: mergeMap, concatMap, switchMap and exhaustMap. With one emission they are functionally the same.

outerobservable$.pipe( mergeMap(value => this.httpclient(params)))

will map the value to what the inner observable emits, in this case the httpclient call, and merge it back into the stream. The difference between the four is what happens when the outer observable emits a second value before the inner observable is finished. Think backpressure; there is data coming down the pipe that needs to be handled (see the sketch after this list).
  • mergeMap will run the observables in parallel when the new emission arrives, merging the values of all of them into the stream as each one completes.
  • switchMap will cancel the running inner observable and run the new one.
  • exhaustMap will throw away any new incoming emissions until the inner observable is completed.
  • concatMap will queue all the incoming emissions, doing them in sequence, letting each one complete before running the next.
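
Here is a minimal sketch of the difference outside of Effects, using nothing but Rxjs: the outer observable emits twice immediately, and each inner observable takes 100ms to complete.

import { of } from 'rxjs';
import { delay, mergeMap, concatMap } from 'rxjs/operators';

of(1, 2).pipe(
    mergeMap(n => of(n * 100).pipe(delay(100)))   // both inner observables run in parallel
).subscribe(value => console.log(value));         // 100, 200 after ~100ms

of(1, 2).pipe(
    concatMap(n => of(n * 100).pipe(delay(100)))  // queued; each completes before the next starts
).subscribe(value => console.log(value));         // 100 after ~100ms, 200 after ~200ms
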
If you are getting the feeling that you need to know Rxjs to use Effects effectively, you are right. There is one more operator you need to know, and you will use it when doing an httpclient call.

addrelations$ = createEffect(() =>
    this.actions$.pipe(
        ofType(LoadContacts),
        mergeMap(action => this.dataService.queryData(action)
            .pipe(
                map(contacts => LoadContactsSuccess({contacts: contacts['Contacts']})),
                catchError(e => of(RawErrorHandler(e)))
            ))));

I've encapsulated the httpclient call in a service method that returns the observable. There are a few details that are important here.

The inner observable has a pipe. You can nest piped operators to your heart's content, but this one has a specific purpose. It could be written like this:

addrelations$ = createEffect(() => 
   this.actions$.pipe(
            ofType(LoadContacts),
            mergeMap(action => this.dataService.queryData(action)),
            map(contacts => LoadContactsSuccess({contacts: contacts['Contacts']})),
            catchError(e => of(RawErrorHandler(e)))
   )
);

The http call response gets merged back into the stream, mapped to the desired shape in an Action, and the error handled.

The only problem is that the error would end the subscription to this.actions$, and your action would only work once. It is possible to configure Ngrx so the effect gets resubscribed, but this is important to know if you have long and deep observable sequences.

Rxjs makes easy things hard and hard things easy. Same with Effects. An app that does a simple api call and renders the results gets complicated with the Action => Effect => reducer => selector cycle. But if you have multiple data sources, multiple components rendering the data, complex component-to-component interactions, things like autosave and undo-redo, network fault tolerance, etc., many of these complicated things become easy, almost trivial, with the power and flexibility of Ngrx. But you need to be comfortable using Rxjs.


Thursday, March 25, 2021

NGRX Actions

 export interface Action { type: string }

This is what defines an Action. 

Ngrx implements a message passing architecture, where Actions are dispatched.

this.store.dispatch(action)

The mechanism behind this is an ActionsSubject, which is a BehaviorSubject with some extensions. When you dispatch an action, it is as simple as this.

this.actionsObserver.next(action);

The listeners subscribe to this action stream, either in the reducers which modify the state, or in an Effect. This simple structure allows you to build a message passing system which defines the data flows in your application.

Here are some things you need to know.

A well designed message passing system will clearly define the paths of execution and transfer of data. I find the easiest way of seeing this in my applications is to log out the action from a reducer.

export function reducer(state: State | undefined, action: Action) {
  console.log(action.type);
  return emailReducer(state, action);
}

You will see the flow of commands and data as your application goes through its function.

Broadly, there are two types of Actions: those that the reducers listen for to modify the state, and those that they don't, which are listened for in Effects. Reducers are pure functions, so when something asynchronous or a side effect needs to occur, an Effect will listen. The typical example goes like this

  1. The component dispatches an action to load data.
  2. An Effect listens for the load action, and does an api call to fetch the data.
  3. The Effect emits an action with the data attached.
  4. The reducer listens for that action and updates the state.
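
In code, the two ends of that cycle could look like this; a sketch with hypothetical names, assuming an entity adapter and initial state for the jobs:

import { createAction, createReducer, on, props } from '@ngrx/store';

export const LoadJobs = createAction('[JobList Component] Load Jobs');

export const LoadJobsSuccess = createAction(
  '[Jobs Effect] Load Jobs Success',
  props<{ jobs: Job[] }>()
);

// Only the success action touches the reducer; LoadJobs is picked up by an Effect.
export const reducer = createReducer(
  initialState,
  on(LoadJobsSuccess, (state, { jobs }) => adapter.setAll(jobs, state))
);
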
Message passing systems can easily get out of hand. I think the key is to use a declarative approach. An example of a declarative system that easily translates into actions is a file upload component that I wrote. It has the input to select a file or files, a table to contain the list, and buttons to upload them individually or all of them.
  • selected file or files are inserted in the state
  • a file or all files can be removed from the state
  • a file or list of files are uploaded
  • the upload progress is displayed
  • success clears the state
  • an error sets the status
This is how it is translated into Actions

export const InsertFile = createAction(
  '[UploadTableComponent] Insert file',
  props<{ selectedfile: SelectedFileList }>()
);

export const RemoveFile = createAction(
  '[UploadTableComponent] Remove file',
  props<{ selectedfile: SelectedFileList }>()
);

export const SetFileStatus = createAction(
  '[SelectedFileList Effect] Set File Status',
  props<{ selectedfile: SelectedFileList, status: string }>()
);

export const SetFileProgressStatus = createAction(
  '[SelectedFileList Effect] Set File progress Status',
  props<{ selectedfile: SelectedFileList, progress: number }>()
);

export const SetFileProgressComplete = createAction(
  '[SelectedFileList Effect] Set File progress Complete',
  props<{ selectedfile: SelectedFileList }>()
);

export const SelectAllFiles = createAction(
  '[UploadTableComponent] Select All',
  props<{ id: string, module: string, selectall: boolean }>()
);

export const SelectFiles = createAction(
  '[UploadTableComponent] Select File',
  props<{ selectedfile: SelectedFileList }>()
);

export const UploadFiles = createAction(
  '[UploadTableComponent] Upload File',
  props<{ selectedfile: SelectedFileList }>()
);

export const UploadSelected = createAction(
  '[UploadTableComponent] Upload Selected',
  props<{ selectedfile: SelectedFileList, process: string }>()
);

In this component the file select input dispatches the InsertFile action. Buttons on the file list table can either remove or upload an individual file, or remove or upload the whole list. The effect handling the httpclient call emits actions to update the progress and completion of the upload, or notify of an error.
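
As a sketch of the reducer side, assuming the selected files live in an Entity state and that SelectedFileList describes a single selected file entry with an id and a progress field (my assumptions, not shown above):

import { createReducer, on } from '@ngrx/store';

export const reducer = createReducer(
  initialState,
  on(InsertFile, (state, { selectedfile }) => adapter.addOne(selectedfile, state)),
  on(RemoveFile, (state, { selectedfile }) => adapter.removeOne(selectedfile.id, state)),
  on(SetFileProgressStatus, (state, { selectedfile, progress }) =>
    adapter.updateOne({ id: selectedfile.id, changes: { progress } }, state))
);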

If you lay out a declarative description of your component, it makes the flow clear and understandable, and it is easy to translate into actions. All that is then required is to define the data that gets passed around. Then when the component gets executed, logging the actions will match the declarative description, or expose flaws in the implementation that can be fixed.

The goal of the NGRX pattern is to make complicated data flows and interactions easy to understand. Self documenting Actions, along with a well thought out declarative spec for the flows and interactions will lead to a successful implementation.

Saturday, March 20, 2021

NGRX Normalization

Application state is the data required to render the views over time.

One of the things that makes Ngrx conceptually difficult is how to structure the state. There is no one answer, because the source, usage and modification of the state is different in every app.

This is how I approach it.

The path between the api and components represents a series of Ngrx actions and functions. This is one direction of the data flow.

  1. The component dispatches an action.
  2. An effect watches for that action and runs an http call that fetches the data
  3. On success a second action is dispatched containing the data
  4. The reducer responds to the action and updates the state
  5. A selector emits the data in a shape useful for the component
  6. The component renders the view.
The first three are simply the mechanics of sending a command and having it execute. The last two, the selector and the component render, are what determine what you do in step 4, the reducer.

Something I've been working on recently illustrates the challenge. I have a component with a map, a table and a day selection component for viewing the locations and routes in a workday of a service tech. The data comes from gps logs, in different formats and different levels of detail. Some are simply a list of time and locations, others are the result of analysis and have lists of routes and places.

Much of the grunt work of assembling the data is done on the server, and the client gets an array of routes and stop points. The goal of the component is to come up with timesheet data, travel distances for expenses, and billing information for the services rendered; when and where and how long.

The three components render different aspects of the data. 
  • The day selection component renders the selectedday, which comes from the router url. 
  • The map renders routes and stop points, using the map functions and classes.
  • The table lists the same routes and stop points, with duration, distance, at a specific location identified as an address and/or business location.
The selectedday is driven by the UI; the user selects the day or passes the day via the url. A change of day dispatches an action which fetches the location data for that day.

The map selector gets the list of routes and places and builds the map classes. The component subscribes and attaches the UI callbacks.

The table selector puts together a list extracted from each location item in the state, with duration, distance and any other data that is available.

Imagine the data flowing and being modified from the server => effect => reducer => state => 3 selectors. If the reducer processed the data into the map classes, the table wouldn't get what it needs. Same if the reducer processed the data into the shape the table required. There is a point in that line of modifications before it branches for the three selectors, and that is what you want your reducer to put in the app state.
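
In selector form, the branch point looks something like this sketch (all names hypothetical): the reducer stores the normalized routes and stop points once, and each selector derives its own shape.

export const getDayLocations = createSelector(getTechDayState, (state) => state.locations);

export const getMapFeatures = createSelector(getDayLocations,
    (locations) => locations.map(location => buildMapMarker(location)));

export const getTableRows = createSelector(getDayLocations,
    (locations) => locations.map(({ start, duration, distance, address }) =>
        ({ start, duration, distance, address })));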

If you are only viewing the data, this point is usually quite simple to figure out. But what if you are editing the data?


Wednesday, December 16, 2020

State as Observables, State as Ngrx.

Observables and Ngrx are complex. As with any technology, it is very very easy to forget what you are trying to accomplish as you wade through the details.

Start and end by thinking "What do I want to accomplish".

These tools are capable of taking a very complex problem and simplifying it. That has been my experience.

But they are also capable of taking a simple situation and making it very complicated. 

Start with defining the State. It is the data the view needs to render over time. How would you think about this problem?

Where is the data coming from? Usually an api. 

What does the data look like from the api? Usually not what you need for the view, so the observable chain or the reducer functions take this possibly complex tree and transform it into what your view needs.

How do I know what the data looks like? Tap is your friend. tap(value => console.log('note from where', value)) in the observable chain tells you the shape. As you change it, use a tap to verify.

What shape do I want? Flat and simple. <div *ngFor="let item of items$ | async"> should give you an item that can be passed to a component for viewing or editing. So either in the effect, observable chain or reducer, transform the data into that shape.

If you are fighting with nested arrays and complex objects, make it simple: create a relation key scheme using Entities so that the selectors are easy and fast. A one-time cost at insertion vs. the every-time-you-subscribe cost of transformation.
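
A sketch of what that relation key scheme means, with made-up names: instead of a job carrying a nested files array, each collection is stored flat and points back with a key.

interface JobFile {
  id: string;     // unique per file, the entity id
  jobid: string;  // relation key back to the owning job
  name: string;
}
// state.jobs.entities[jobid]   -> the job, with no nesting
// state.jobfiles.entities[id]  -> a file; filter the ids by jobid to rebuild the list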

With most complex technical issues, framing the question is often the most difficult thing. The question here is: what should my ngrx selector or observable chain emit to make my component simple? When you have answered that, the specific details of how to construct the chain, reducer, selector etc. become a matter of coding and testing.

What do I want to accomplish? What is the shape of the data I need? 


Wednesday, November 13, 2019

The Secrets of Docker Secrets

Most web apps need login information of some kind, and it is a bad idea to put it in your source code, where it gets saved to a git repository that everyone can see. Usually these credentials are handled by environment variables, but Docker has come up with what they call Docker secrets. The idea is deceptively simple in retrospect; while you figure it out, it is arcane and difficult to parse what is going on.

Essentially the secrets function creates in-memory files in the docker image that contain the secret data. The data can come from files, or from a Docker swarm.

The first thing to know is that the application running in the docker image needs to be written to take advantage of the Docker secrets function. Instead of getting a password from an environment variable, it would get the password from the file system at /run/secrets/secretname. Not all available images use this functionality. If they don't describe how to use Docker secrets, they won't work: the files will be created in the image, but the application won't read them.
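
In a node application that means a file read at startup; a minimal sketch (the environment variable fallback is my own convention for running outside Docker):

import { readFileSync } from 'fs';

function readSecret(name: string): string {
  try {
    // Docker mounts each secret as an in-memory file under /run/secrets
    return readFileSync(`/run/secrets/${name}`, 'utf8').trim();
  } catch {
    // outside Docker, fall back to a plain environment variable
    return process.env[name.toUpperCase()] || '';
  }
}

const mongoUserPwd = readSecret('mongodb_userpwd');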

For a development setup, having files outside of the git source tree works well. To create the files with secrets, I created a folder called serverdata, with a dev/ and prod/ folder within. In the dev/ folder, run this command for each piece of secret data you will need:

echo "shh, this is a secret" > mysecret.txt
The names simply need to tell you what they do. What the secret is called in the image is set in the docker configuration. This is what my dev/ folder looks like:
-rw-r--r-- 1 derek derek 66 Nov  5 14:49 mongodb_docker_path
-rw-r--r-- 1 derek derek  6 Oct 22 14:09 mongodb_rootusername
-rw-r--r-- 1 derek derek 13 Oct 22 14:08 mongodb_rootuserpwd
-rw-r--r-- 1 derek derek 18 Oct 22 14:10 mongodb_username
-rw-r--r-- 1 derek derek 14 Oct 22 14:10 mongodb_userpwd
-rw-r--r-- 1 derek derek 73 Oct 22 14:02 oauth2_clientid
-rw-r--r-- 1 derek derek 25 Oct 22 14:02 oauth2_clientsecret
-rw-r--r-- 1 derek derek 14 Oct 22 14:03 oauth2_cookiename
-rw-r--r-- 1 derek derek 25 Oct 22 14:04 oauth2_cookiesecret
-rw-r--r-- 1 derek derek 33 Oct 26 08:27 oauth2_redirecturl

The names give function and description. I have some configuration details in there as well.

Using Secrets with docker-compose

This is the docker-compose.yml that builds a mongodb image with all the configuration.
version: '3.6'
services:
  mongo-replicator:
    build: ./mongo-replicator
    container_name: mongo-replicator
    secrets:
      - mongodb_rootusername
      - mongodb_rootuserpwd
      - mongodb_username
      - mongodb_userpwd
    environment:
      MONGO_INITDB_ROOT_USERNAME_FILE: /run/secrets/mongodb_rootusername
      MONGO_INITDB_ROOT_PASSWORD_FILE: /run/secrets/mongodb_rootuserpwd
      MONGO_INITDB_DATABASE: admin
    networks:
      - mongo-cluster
    depends_on:
      - mongo-primary
      - mongo-secondary
 
And the secrets are defined as follows:
secrets:
  mongodb_rootusername:
    file: ../../serverdata/dev/mongodb_rootusername
  mongodb_rootuserpwd:
    file: ../../serverdata/dev/mongodb_rootuserpwd
  mongodb_username:
    file: ../../serverdata/dev/mongodb_username
  mongodb_userpwd:
    file: ../../serverdata/dev/mongodb_userpwd
  mongodb_path:
    file: ../../serverdata/dev/mongodb_docker_path
The secrets: section reads the contents of each file into a namespace, which becomes the name of the file under /run/secrets/. The Mongo docker image looks for an environment variable with the suffix _FILE, then reads the secret from that file in the image file system. Those are the only two variables supported by the Mongo image.

Of course it gets more complicated. I wanted to watch the changes in the database within my node application for various purposes. This function is only supported in a replica set in Mongo. To fully automate the configuration and initialization of Mongo within Docker images using replication requires a second Docker image that waits for the Mongo images to initialize, then runs a script. So here is the complete docker-compose.yml for setting up the images:
version: '3.6'
services:
  mongo-replicator:
    build: ./mongo-replicator
    container_name: mongo-replicator
    secrets:
      - mongodb_rootusername
      - mongodb_rootuserpwd
      - mongodb_username
      - mongodb_userpwd
    environment:
      MONGO_INITDB_ROOT_USERNAME_FILE: /run/secrets/mongodb_rootusername
      MONGO_INITDB_ROOT_PASSWORD_FILE: /run/secrets/mongodb_rootuserpwd
      MONGO_INITDB_DATABASE: admin
    networks:
      - mongo-cluster
    depends_on:
      - mongo-primary
      - mongo-secondary

  mongo-primary:
    container_name: mongo-primary
    image: mongo:latest
    command: --replSet rs0 --bind_ip_all
    environment:
      MONGO_INITDB_DATABASE: admin
    ports:
      - "27019:27017"    networks:
      - mongo-cluster
  mongo-secondary:
    container_name: mongo-secondary
    image: mongo:latest
    command: --replSet rs0 --bind_ip_all
    ports:
      - "27018:27017"    networks:
      - mongo-cluster
    depends_on:
      - mongo-primary
The Dockerfile for the mongo-replicator looks like this:
FROM mongo:latest
ADD ./replicate.js /replicate.js
ADD ./seed.js /seed.js
ADD ./setup.sh /setup.sh
CMD ["/setup.sh"]
Mongo with various scripts added to it. Here they are.

replicate.js
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-primary:27017" },
    { _id: 1, host: "mongo-secondary:27017" },
  ]
});
seed.js
db.users.updateOne(
  { email: "myemail@address.com" },
  { $set: { email: "myemail@address.com", name: "My Name" } },
  { upsert: true }
);
and finally what does all the work, setup.sh
#!/usr/bin/env sh
if [ -f /replicated.txt ]; then
  echo "Mongo is already set up"
else
  echo "Setting up mongo replication and seeding initial data..."
  # Wait for a few seconds until the mongo server is up
  sleep 10
  mongo mongo-primary:27017 replicate.js
  echo "Replication done..."
  # Wait for a few seconds until replication takes effect
  sleep 40

  MONGO_USERNAME=`cat /run/secrets/mongodb_username | tr -d '\n'`
  MONGO_USERPWD=`cat /run/secrets/mongodb_userpwd | tr -d '\n'`

  mongo mongo-primary:27017/triggers <<EOF
rs.slaveOk()
use triggers
db.createUser({
  user: "$MONGO_USERNAME",
  pwd: "$MONGO_USERPWD",
  roles: [ { role: "dbOwner", db: "admin" },
           { role: "readAnyDatabase", db: "admin" },
           { role: "readWrite", db: "admin" } ]
})
EOF

  mongo mongo-primary:27017/triggers seed.js
  echo "Seeding done..."
  touch /replicated.txt
fi
In the docker-compose.yml the depends_on: orders the creation of images, so this one waits until the others are done. It runs replicate.js, which initializes the replica set, then waits for a while. The password and username are read from the /run/secrets/ files, the linefeed removed, then the user is created in the mongo database. Then seed.js is called to add more initial data.

This sets up mongoDb with an admin user and password, as well as a user that is used from the node.js apps for reading and writing data.

No passwords in my git repository, and an initialized database. This is working for my development setup, with a mongo database replicated so that I can get change streams, and read and write functions from the node.js application.

More to come.

  1. Using secrets in node.js applications and oauth2_proxy
  2. The oauth2_proxy configuration
  3. Nginx configuration to tie the whole mess together

Tuesday, November 05, 2019

Angular in Docker Containers for Development

I've been using the Google login for authentication for my application. The chain of events is as follows:

  1. In the browser a Google login where you either enter your account information or select from an already logged in Google account.
  2. The Google login libraries talk back and forth, and come up with a token.
  3. The app sends the token to the node application, where it verifies its validity, extracts the identification of the user, verifies it against the allowed users, then responds with the authentication state to the app in the browser.
  4. The angular app watches all this happen in a guard, and when you are authenticated routes to wherever you wanted to go.
It all works fine, but I was running into two issues. 
First, how do you authenticate a websocket connection? I wrote the logic where the token was sent via the socket, and the connection is maintained if the token is valid. But I don't trust my code when it comes to security.
The second issue is that the normal garbage traffic that hits any server gets a large app bundle, putting an unnecessary load on the server. Even if you lazy load and start with a simple login page, the bundle is not insignificant.

I was foreseeing complications as I built out my app. I wanted security to be simple, audited by people who know, covering websockets and api calls, and not a burden on the server.

I ran across an application called oauth2_proxy, which seems to solve my problem. You put your application and all the api routes behind this proxy, which authenticates via the numerous oauth2 services available, including Google.

I set it up and got it working, then realized that I needed something very similar to my server deployment on my development machine. Knowing from experience that the setup of these things is complex and long, I wanted to figure it out once, then change a few things and have it ready for deployment. Docker came to mind, partly because the oauth2_proxy has a docker image.

So my structure is as follows. I have it basically working, no doubt I'll find a bunch of issues, but that is why I wanted it on a development machine. I'm using docker-compose to put the thing together, and the goal is to have it ready to go with one command.

  1. Nginx as a front facing proxy. The docker image takes a configuration file, and it routes to the nodejs api applications, websockets and all the bits and pieces.
  2. Oauth2_proxy for authentication. I'm using the nginx auth_request function where a request comes into nginx, and on the locations needing authentication it calls oauth2_proxy then routes either to a login page or the desired route.
  3. Nestjs server application that handles the api calls
  4. A second nodejs application that does a bunch of work.
  5. A third nodejs application that serves websockets.
  6. Mongodb as the data store. The websocket microservice subscribes to changes and sends updates to the app in the browser.
  7. For development, I have a docker image which serves the angular-cli ng serve through nginx. The nodejs applications are also served the same way, meaning they recompile when the code is changed.
So how does it look? I'll go through this piece by piece. There were some gnarly bits which swallowed too much time, with a dastardly simple solution obvious only in retrospect.

Setting up a MonoRepo with Nx

When I started poking around with this idea I found that the structure of my application was lacking. Things like shared code between Angular and Nestjs, and the serve and build setup for the node applications, didn't work very well. A very nice solution is the Nx system. It required a bit of work and thought to move things around, but in the end I have a setup where ng serve api starts the node application in development mode. https://nx.dev/angular/getting-started/getting-started shows how to install the system. When you install, it will ask for the structure of your application; I selected angular with a nestjs backend. It creates a skeleton that is very nice.

Running Angular Cli in Docker

This is really neat. Here is the Dockerfile.

FROM node

ENV HOME=/usr/src/app
RUN mkdir -p $HOME
WORKDIR $HOME

RUN npm -g install @angular/cli@9.0.0-rc.0

EXPOSE 4200

USER 1000

Put this Dockerfile in the same directory as the package.json file in the Nx structure. I call it Dockerfile.angular, since I have many dockerfiles there.

Then in a docker-compose.yml file, the docker-compose configuration,

angular:
  container_name: angular
  build:
    context: .
    dockerfile: Dockerfile.angular
  ports:
    - "4200"  volumes:
    - .:/usr/src/app
  command: ng serve --aot --host 0.0.0.0

The volumes: statement lets the docker image see the current directory; then ng serve runs and serves the application. I'm using it from an Nginx proxy, so the port is only seen from the docker network. You might want to expose it as 4200:4200 to use it without Nginx.

The node applications are identical except for the Dockerfile EXPOSE statement, where I set the value to the port that the Nestjs app is watching. And instead of ng serve, this is what the docker-compose.yml looks like.

scripts:
  container_name: scripts
  build:
    context: .
    dockerfile: Dockerfile.scripts.dev
  ports:
    - "3333"  volumes:
    - .:/usr/src/app
  command: ng serve scripts
  depends_on:
    - mongo-replicator
  secrets:
    - mongodb_username
    - mongodb_userpwd
    - mongodb_path
ng serve scripts runs the node application. There are a couple of things here that I will get into in future posts.

ng serve --aot --host 0.0.0.0 is one of the sticky things I had to figure out. The default host is localhost, and the websockets for live reloading the app in the browser won't work unless you set this correctly.

More to come.

  1. Docker secrets and using them in the various images
  2. Setting up Mongo.
  3. The Oauth2_proxy configuration
  4. Nginx configuration


