IIS and the Event Log

The event log has a series of errors that I’m trying to work out.  Here are some assorted notes:

Rackspace was very helpful and recommended I tackle the WAS errors, which are coming from the ASP.NET CLR.  But I can see no details.  I am offered the chance to open the JIT debugger in Visual Studio – but I do not have VS installed on the server (need to turn off server-side debugging in web.config).  Noted that SChannel errors can often be ignored unless tied to other specific events at the same time.  They also recommended looking at the Event Viewer Security log for repeated failed login attempts (hack attempts).

Recommended using the Event Viewer log to isolate error times, and then reviewing the IIS logs for errors at those times.  Also recommended failed request tracing.

They also recommended decreasing my app pool recycle interval (RegularTimeInterval, in minutes) to about 12 hours (720 min) from the default of 29 hours (1740 min), and rebooting the server monthly.
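For reference, a sketch of setting the recycle interval with appcmd from an elevated prompt (the pool name is a placeholder; verify the property path against the appcmd docs for your IIS version):

```cmd
rem set the app pool periodic restart interval to 12 hours
%windir%\system32\inetsrv\appcmd.exe set apppool "MyAppPool" /recycling.periodicRestart.time:12:00:00
```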

No recommendations for any good log analyzers, but I found Log Parser, which allows SQL-statement queries on the log files via a console app or PowerShell.  There is also a Log Parser Studio, which adds a GUI.  An example query: SELECT * FROM '[LOGFILEPATH]' WHERE time >= '11:58:00' AND time <= '12:00:00'.  The LOGFILEPATH is set beforehand as a single file or a collection of files.  Analysis can be run on any number of file types, from IISW3CLOG to CSV files.
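From the console, a hedged sketch of running such a query with Log Parser (the log path and output file are placeholders, and the time-comparison syntax mirrors the example above rather than the official docs):

```cmd
rem query the IIS W3C logs for a two-minute window and write the matching rows to a CSV
LogParser.exe -i:IISW3CLOG -o:CSV "SELECT * INTO errors.csv FROM C:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log WHERE time >= '11:58:00' AND time <= '12:00:00'"
```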

There is also the Debug Diagnostic Tool, which apparently takes some set-up.  I have not tried this yet.  It reportedly helps troubleshoot hangs, slow performance, memory leaks and more.  There is a summary on the basics of setting up debugging.


Posted in Uncategorized | Comments Off on IIS and the Event Log

Identity and the Client

Login has always seemed like an afterthought.  As a solo developer, projects grow from shiny new, interesting features and technologies.  Identity, authentication (login) and authorization (roles) are tacked on later.  Identity is boring.

Yet identity seems to be the critical foundation on which the rest of the application relies, and it is really, really difficult to get right (as evidenced by nearly daily reports of website hacking).  I have never written any mission-critical applications (e.g. medical devices) or those requiring tight security (e.g. patient privacy, banking).  But I still need a reasonably secure login.  From 2000 to 2010, I used ASP's built-in identity solutions and later ASP.NET MVC.  They have instant, built-in support for identity databases in the framework, as well as external authentication (e.g. Google, Facebook, Microsoft).  There are even anti-forgery tokens, sanitization of input data (to prevent SQL injection attacks) and other security measures.  Easy to use – click a checkbox or dropdown on a new MVC project, and it just works.  Magic.

But that is, I am told, not the 2016 way to do things, especially not for working with untethered JavaScript client frameworks (e.g. Angular).  These clients should call APIs on the server with token-based authentication and authorization.  Tokens are hard.  Maybe not so hard if you use Auth0, which looks like a very nice solution.  But that is not the direction (or maybe better described as layer of indirection) I want to travel.

Server and client development in 2016 seems to be cluttered by utilities/services that, for a price, will replace programming challenges with magic boxes: key, challenging functionality wrapped in shiny packages with catchy names.  I send data in and get data out.  What goes on inside is mysterious.  Libraries from NuGet or npm do this as well (e.g. Newtonsoft, Lodash), but at least with libraries I understand the basics of what occurs inside.

In contrast, cloud computing (e.g. Azure, AWS), despite all its great conveniences, including spinning up 1 or 50 servers throughout the world, is in large part composed of magic boxes.  For my needs, “owning” the whole server still makes sense.  And the costs of adding magic boxes, cloud computing, OAuth, Firebase… could quickly add up.

So, in the last month I delved into IdentityServer4 (built on ASP.NET Core).  In the recommended configuration, IdentityServer is given its very own ASP.NET Core project.  It can use MVC controllers/views to log in a user and access their roles/claims.  IdentityServer defines clients with scopes (e.g. openid, profile, “api”) as well as users.

What is incredible is that the primary developers, Dominick Baier and Brock Allen, have set up an entire ecosystem for identity, along with great tutorials, documentation and many samples and starter projects.  I’ve worked through the tutorials and have used Quickstart starter project #6 – which in turn uses the ASP.NET identity database – and it works well.  Using external providers (e.g. Google, Microsoft, Facebook) also works, but I am still trying to reconcile username/password standard logins with multiple external providers.  In effect, how to keep a consistent set of claims for the same user who authenticates today with a username, tomorrow via Google, and the next day via Microsoft.  Kevin Dockx covers some approaches to this in his Pluralsight course.

You’ll want to work through the IdentityServer tutorials first – the setup is not intuitive.  One ASP.NET Core project is the identity server, a second is the API, and the third is the JavaScript app or MVC app.  When a user needs to log in from the JavaScript or MVC app, they are redirected to the identity server for login.  Once login occurs, a token is created to allow the end-user JavaScript or MVC app access to the web API.

Testing in Postman is fairly straightforward using this workflow.  You POST to [identityserver_URL]/connect/token with body data (x-www-form-urlencoded) containing client_id, client_secret, grant_type, scope, username and password.  These are all defined in the IdentityServer config except for username and password.  However, the grant_type text is tricky.  With trial and error, I found “password” for the resource owner flow and “client_credentials” for the client credentials flow.  Then I found that valid grant types are listed at http://localhost:5000/.well-known/openid-configuration.  Once you post to the token service, an access token is returned, and that can be copied into the API call header to allow authorization.  The access token parts can be visualized by pasting it at the jwt.io site.
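Besides jwt.io, you can peek at the payload locally, since a JWT is just three dot-separated base64url segments.  A minimal sketch with a hard-coded sample payload (this fragment is illustrative, not a token issued by IdentityServer):

```shell
# the middle JWT segment is base64url-encoded JSON; re-add the stripped
# '=' padding and decode it (this sample payload holds only a "sub" claim)
payload='eyJzdWIiOiIxMjM0NTY3ODkwIn0'
decoded=$(echo "${payload}=" | base64 -d)
echo "$decoded"   # {"sub":"1234567890"}
```

Real tokens may also need '-' and '_' translated back to '+' and '/' before decoding.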

Everything worked well until I attempted deployment.  Then I became stuck on an error – which started as a 500 internal server error.  Once I added logging to file (NLog Extensions – look for the resulting log files at c:\temp\nlog…), I found IDX10803: Unable to obtain configuration; it could not open /.well-known/openid-configuration.  This worked on the dev machine with localhost:5000, but on the server, I needed to use a full URL.  This was easy enough to configure using appsettings.production.json, but the endpoint was still not being found.  After spending hours, it turned out that I was trying to use //myUrl/identityServer and it would not work – the URL was not being found.  Instead, I needed to include the scheme: https://myUrl/IdentityServer (or http://).

One additional issue also took me several hours to figure out.  The IdentityServer example projects use “services.AddMvcCore”, not “services.AddMvc”.  As I learned, AddMvcCore is a barebones subset of the framework.  This worked fine until I started to add additional functionality, such as Swashbuckle (Swagger), a great API helper utility, and while following examples, I could not get it to work.  Finally, once I changed the configuration in Startup to “services.AddMvc”, all worked.

There are multiple roadblocks to understanding IdentityServer, OAuth2 and OpenID in ASP.NET.  As of November 2016, Pluralsight has 2 courses.  Both use ASP.NET (not Core) and IdentityServer3 (not 4), but the overall concepts – flows/grants, clients, scopes, users – are the same as for the newer versions.  I started by watching “OpenId and Oauth2 Strategies for Angular and Asp.Net”, which is very thorough and in-depth but quite overwhelming; rather than an introduction, it is more of an “everything you ever wanted to know about IdentityServer”.  I then watched “Using Oauth to Secure Your Asp.Net Api”, which was geared at an introductory level and easier for me to get my head around.  In retrospect, I would watch this one first.  He does, however, recommend using ResourceOwnerPassword for Xamarin mobile apps authenticating with the identity server, and this may be insecure due to transfer of the client username and password; I think a browser-based authentication flow may be better for this.  The resource owner flow appears to be OK for my API since the API is on the same server as the identity server.

It took me a while to understand that access tokens are just for access.  They do not include user claims (e.g. username, email…).  To get user claims, you need an id token – or you can tack claims onto the access token, but this requires the resource owner flow.  In addition, understanding how roles fit into the new claims world was not intuitive, but it is explained here.

It took me days to figure out how to use ASP.NET Core Identity (and an EF/SQL database) with IdentityServer.  Most of this is well laid out here.  But what took me the longest was getting the username (user email) and role (e.g. admin) to be passed as claims to the client API.  I created an IdentityServer client with AllowedGrantType = “ResourceOwnerPasswordAndClientCredentials” to do this.  Then I created a custom scope that uses the following claims (as described here):

    Claims = new List<ScopeClaim>
    {
        new ScopeClaim("name"),
        new ScopeClaim("role")
    }

As an alternative, you can add “IncludeAllClaimsForUser = true” to the custom scope, but that adds additional claims I do not need.
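For context, a hedged sketch of how that claims list sits inside a full custom scope definition, using the pre-1.0 IdentityServer scope model from the Quickstarts (the scope name and display name are illustrative, not from my actual config):

```csharp
// hypothetical API scope that copies the user's name and role claims into the token
new Scope
{
    Name = "myscope",
    DisplayName = "My Notes Api",
    Type = ScopeType.Resource,
    Claims = new List<ScopeClaim>
    {
        new ScopeClaim("name"),
        new ScopeClaim("role")
    }
}
```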

In the API, I added the following to the Startup.cs Configure method.  This stops the default Microsoft JWT claim-type mapping, which tends to mess a few things up:

JwtSecurityTokenHandler.DefaultInboundClaimTypeMap.Clear();


I also added the following lines in the Api to the IdentityServer setup section in Configure:

    app.UseIdentityServerAuthentication(new IdentityServerAuthenticationOptions
    {
        Authority = authority,
        ScopeName = "myscope",

        // map the JWT "name" and "role" claims so User.Identity.Name and role checks work
        NameClaimType = System.Security.Claims.ClaimTypes.Name,
        RoleClaimType = System.Security.Claims.ClaimTypes.Role
    });


Now that the API can identify the username, I can use that in database fields to record the author of entries.  I created a separate UserRepository within the API that calls the ASP.NET identity database to obtain additional user info, which can be joined to the other tables.  This is a bit clunky, however.  When the API held both a “notes” table and a “users” table, note rows had an author foreign key that pointed to a user in the users table, so a query for notes could automatically include information about the user.  However, with the users table in another database, there need to be two queries and a join.

As an alternative, I could create a SQL Server view inside the notes database over the table in the users database (using specific column names instead of *):

CREATE VIEW [dbo].[vUserMirror]
	AS SELECT [Id], [UserName], [Email] -- name the columns you need rather than SELECT *
	FROM [users].dbo.[AspNetUsers];

The users view could then be linked via foreign key to the notes, and a notes query could once again include user info.  The tough part with this is how to create the view automatically with migrations.  Another option is to create a data project that holds all the data files, including identity.  Then client projects could use the data project, and each could extend ApplicationUserDbContext with their own database implementations.  In this way, different client projects could share the same central identity database.  However, it seems fragile as more tables are added for different client projects – the database would grow, and migrations for one client project might break the data for another.  I still need to work with this.  I have searched online for examples of how to solve this and have not found a solution.
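One hedged idea for the “create the view automatically” problem: EF Core migrations can carry raw SQL, so the view could live in a hand-written migration (names mirror the example above; I have not yet adopted this):

```csharp
// hand-edited EF Core migration that creates/drops the cross-database user view
protected override void Up(MigrationBuilder migrationBuilder)
{
    migrationBuilder.Sql(
        @"CREATE VIEW [dbo].[vUserMirror]
          AS SELECT [Id], [UserName], [Email] FROM [users].dbo.[AspNetUsers];");
}

protected override void Down(MigrationBuilder migrationBuilder)
{
    migrationBuilder.Sql("DROP VIEW [dbo].[vUserMirror];");
}
```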

The second problem with this is that now I have two DbContexts inside the project, and migrations are no longer as straightforward.  Add-Migration inside the Package Manager Console will give a cryptic error: “Unrecognized option '--build-base-path'”.  However, if the full non-PowerShell command is issued (“dotnet ef migrations add mymigration”), the error becomes: “More than one DbContext was found. Specify which one to use. Use the '-Context' parameter for PowerShell commands and the '--context' parameter for dotnet commands.”.  So, with a project containing more than one DbContext, use:

  dotnet ef migrations add myMigration -c myDbContext
  dotnet ef database update -c myDbContext


I’ve been watching as IdentityServer4 rapidly nears a full 1.0 release (as of early November it is in RC3).  I am amazed at the productivity and efficiency of Dominick Baier and Brock Allen, and the others working with them.  It seems every time I access their GitHub repo, they made their last changes just hours before.  Their documentation pages are stellar, as are their samples.  I wish their London SDD Conference workshop allowed for online subscription viewing.


Posted in Uncategorized | Comments Off on Identity and the Client

C# Scripting

C# is my favorite coding language.  It seems crazy to master another language (e.g. Python, PowerShell) just to write single-file scripts for isolated tasks.  I can get by in these other languages (with the help of Stack Overflow and Google), but scripting in C# would be so much faster and easier.  And now C# scripting works very well within Visual Studio Code (with some set-up).

Step 1: Install ScriptCS

Step 2: Install SVM

Step 3: In Visual Studio Code, install the extensions: C# for Visual Studio Code (OmniSharp) and ScriptCsRunner.  Enable the extensions.

Step 4: Apply the bug workaround until the ASP.NET Core version of the OmniSharp C# extension supports scripting.  See here.

Step 5: Reboot the machine.  Pick a file folder and start VS Code.  Create an empty project.json file containing only: {}.  Create a C# script file with the CSX extension.  Type C# code and run it with Ctrl-Shift-R.  Select a few lines and run just those with Ctrl-Shift-R.  Instructions are here.
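A trivial .csx file to smoke-test the setup (the file name and contents are just an example):

```csharp
// hello.csx – select these lines in VS Code and press Ctrl-Shift-R,
// or run "scriptcs hello.csx" from the command line
var greeting = "Hello from C# scripting";
Console.WriteLine(greeting);
Console.WriteLine($"2 + 2 = {2 + 2}");
```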


So far, importing C# libraries into C# script files while working in VS Code is the only problem I’ve run across, and it is as yet unsolved.

Posted in Uncategorized | Comments Off on C# Scripting

The dark side of Asp.Net Core: Deployment – Part 2, Databases

In the last episode of ASP.NET Core deployment, we saw the author struggle and nearly give up on a basic ASP.NET Core deployment to IIS; then he magically slays the evil giant, the app works and all is well again.  Now, days later, we return to find the author, with much less hair (it’s all been pulled out), struggling with a new foe: ASP.NET Core WITH SQL Server database deployment.

In this episode, I will spare you the gory details of hair loss and just highlight the steps.  Before getting started, I will channel Andy Rooney to complain about the Start icon on Windows Server 2012.  The one in the bottom left corner that only appears on hover.  Why is that?  Why not show the icon without hovering (or at least give the option to show it)?  Alright, enough distraction; move along.

Deploying an asp.net core sql server database:

Step 1. Create a basic asp.net core mvc app with identity (individual user accounts).

Step 2. Create a database on the server in SQL Server.  FYI: SQL Server Express can be used on the server for small databases (<10 GB).  Set up permissions for the ASP.NET Core app’s “IIS AppPool\[poolName]” identity, or Network Service, or both.  The process is described well here.
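A hedged T-SQL sketch of that permission setup (pool and database names are placeholders; db_owner is convenient for getting started, though db_datareader/db_datawriter is tighter):

```sql
-- create a Windows login for the app pool identity and map it into the app database
CREATE LOGIN [IIS AppPool\MyAppPool] FROM WINDOWS;
GO
USE MyAppDb;
CREATE USER [IIS AppPool\MyAppPool] FOR LOGIN [IIS AppPool\MyAppPool];
ALTER ROLE db_owner ADD MEMBER [IIS AppPool\MyAppPool];
GO
```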

Step 3: Formulate a sql connection string for the server instance (example here).

Step 4: In the top directory of the ASP.NET Core app, duplicate the appsettings.json file and name the copy “appsettings.production.json”.  In the project.json file, add this filename to the “publishOptions.include” array.  In the “appsettings.production.json” file, change the default connection to the one created in Step 3.  The basics are described here.  As a side note, data in the appsettings file can be accessed either with options objects or, given the JSON {Identity:{Authority:'MyAuthority.com'}}, with:

Configuration.GetSection("Identity:Authority").Value
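As a sketch, the appsettings.production.json file might look like this (key names follow the default template and the aside above; the values are placeholders, so match whatever your Startup actually reads):

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=.\\SQLEXPRESS;Database=MyAppDb;Trusted_Connection=True;MultipleActiveResultSets=true"
  },
  "Identity": {
    "Authority": "https://MyAuthority.com"
  }
}
```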


Step 5: Change the Startup.cs file to get an injected instance of the DbContext, then use it to call the Migrate function on the database (described here).  If the database is missing migrations or is outdated, it will be updated; if it is already up to date, nothing will change.

    public void Configure( ... ApplicationDbContext dbContext)
    {
        ...

        // apply any pending EF migrations at app startup
        dbContext.Database.Migrate();

        app.UseIdentity();
Step 6: Publish the app in Visual Studio.

Step 7: Navigate your browser to the URL of the deployed app.

It fails (at least it did for me) with a 500 internal server error.  It is not fixed by changing the web.config to allow full error messages in the browser, nor by changing the Startup.cs Configure method to call app.UseDeveloperExceptionPage() and app.UseDatabaseErrorPage().  My error, at least, occurred too early – during the Startup.cs Configure method – and would therefore not show up in the browser with any level of coercion.

Instead, on the server, open a command window (or powershell window) in the asp.net core application’s root directory and type “dotnet yourapp.dll”.  Now all the diagnostic info is there with a full call stack.  Fix the errors.

Done.  The deployed app with database works.  Another giant slain, another notch in the armor.  Glue your hair back in place, ready for the next frustration.


Posted in Uncategorized | Comments Off on The dark side of Asp.Net Core: Deployment – Part 2, Databases

Rapid Real Data

On a recent night I saw a critically ill, hypotensive woman with prior ectopic and periumbilical abdominal pain.

I completed my exam including a FAST scan, two saline lines running open with pressures at 60-70 systolic, and a call to OB as a heads-up.

Serum pregnancy test pending.  I had just heard Michelle Lin, MD on EM:RAP talk about a rapid bedside whole blood pregnancy test (using a urine pregnancy test kit) based on the article “Substituting whole blood for urine in a bedside pregnancy test,” which showed as good or better efficacy than urine – in 10 minutes.

I inquired about getting this test at our facility, and the initial response was that this was a “VBI – very bad idea” based on this opinion blog post without references, as well as an informal survey of the decision maker’s peers.

I was floored that I had presented a compelling article for a test that is cheap, effective and rapid in the face of critical illness, and it was refuted with opinion.  This is not isolated.  I am excited that our medical practice is more and more based on evidence, and frustrated that the key obstacle to implementation is based not on science, but on preconceived notions and opinion.

This is not a critique of the opinion-based blog.  This is a critique of not recognizing blog posts as opinion and how that differs from controlled studies.  I am, myself, statistically challenged, but this fundamental difference I understand.  And reader, you should recognize that what I write here is 100% my opinion and you should not use it to base any medical decision.  I also understand that research is messy and can easily be misinterpreted.  The response, then, is to focus on the quality of the article and the evidence, and contrast it with similar journal articles.

My wife, who is statistically adept, works in public health for the state and she frequently has conversations about data and evidence that baffle me. My preconceived notion of public health is that they are the purveyors of cohort health data – analysis, interpretation, forecasting, recognition of patterns. Yet she describes a wide variance in the understanding in public health of data acquisition and evaluation (my words not hers) – basic concepts are lacking, and decisions are based on ancient mantras.

Ten years ago when I looked to Cochrane for advice, most of what I found was “further study needed.” Now, that landscape is changing, and I find more and more guidance on what does and does not work when subjected to high quality study. I feel as if evidence based medicine is actually better directing me today.

Posted in Uncategorized | Comments Off on Rapid Real Data

The dark side of Asp.Net Core: Deployment

To host my Angular 2 application (fpnAngled) from the last post, I used the standard ASP.NET framework (4.6).  However, as I was getting ready to implement IdentityServer, I realized that ASP.NET and Angular 2 routing were not working together.  I could not get the HTML5 URL format in Angular 2 to route properly through ASP.NET, despite changing web.config to include rewrite rules for filenames and directories.  The Angular app would route well until the browser window was refreshed – then it would not be able to find dependencies or reload.  So after hours of tinkering, I explored using ASP.NET Core to host the Angular app.  In retrospect (see below), I might have seen success with the old ASP.NET framework by changing the index.html base href.

The new asp.net core framework works very well on development machines.  Great new features, simplified structure, nice integrated web server – all in a small package.  As usual, Visual Studio works well for core development.

So I was surprised when I attempted deployment.  This should be simple.  Well – simple by MS standards.  Creating and configuring IIS virtual directories with correct permissions always seems to take me an hour of tinkering.  Often the problem is in setting security for the virtual directory to allow “IIS APPPool\MyNewPool” in addition to Network Service, my admin account and now WDeployAdmin.  Why isn’t some of this automatic when you make a web application?  Why is the app pool not even listed in the potential users to add in the security tab (and why do I need to click through 5 windows to get the list of users to add to the folder security)?  Sure, I could use Azure and trade one complicated set of controls for another; at least with IIS, it is not all a magic black box.  I do sometimes wonder if Microsoft is trying to upsell Azure by keeping IIS unnecessarily complicated.  And sure, I could work on learning server admin and IIS, but I find it hard enough to keep up with application development.  I do typically get IIS sites working eventually, after a period of trial and error, and without opening too many security holes.

With ASP.NET Core, I expected to build, publish and run a Kestrel console app via IIS.  I am not even crossing platforms – this is Windows Server 2012; this should flow easily, just as in the 5-minute demo videos (of which, at this point, there are very few on ASP.NET Core deployment to IIS).

I followed the deployment instructions on the official ASP.NET Core site (which is much improved over the typical, less readable MS technical documentation).  Still, the documentation is not exactly sequential, and key setup information is found on subsequent pages.  Surprisingly, a large part of the instruction page is devoted to setting up IIS for first-time web hosting, while the more difficult aspects are sparsely documented.

So, I initially missed the required installation of ASP.NET Core on the server (it was right below how to make IIS serve websites).  I then tried VS Web Deploy but received a cryptic error, so I tried deploying a zipped file package and extracting it on the server.  But what should be at the app root: the Content folder?  The website folder?  Clearly the zipped file did not have the right deployment structure, so I deleted files in the directory to start over.  But even deleting files/folders was a chore, since one of the folders (the content folder) needed admin permission to delete, and Windows Explorer running under the admin account on the server does not have admin permission.  Even trying to change folder security to grant me permission to delete did not work.  Fortunately, PowerShell worked with Remove-Item.  This, by the way, reminds me of trying to delete multiple nested node_modules directories in Windows Explorer, which also does not work and requires work-arounds.  Why is Explorer so anemic sometimes?  Why, MS, why do you cause me this pain?

So I returned to the Web Deploy method and realized that the HttpPlatformHandler and Web Deploy v3.6 were first needed on the server.  I installed these, and then Web Deploy did not like either a self-signed server certificate or my SSL certificate.  I tried making the certificates trusted with MMC.exe.

I tried allowing untrusted certificates, but could not find the correct pubxml at first (which is, by the way, under project/properties).  It still did not work (that fixed the certificate error, but produced a new cryptic error: disallowed user).  Ultimately, using Rick Strahl’s recommendations for PublishMethod and AuthType, I was able to find the following settings in pubxml that did work (and did not require allowing untrusted certificates):

 <MSDeployPublishMethod>RemoteAgent</MSDeployPublishMethod>
 <AuthType>NTLM</AuthType>


Really, MS?  Rick Strahl published his fix more than 3-4 months ago and the pubxml default is still not correct.  On the bright side, once Web Deploy worked, everything fell into line.  The root publish directory now has the web.config, all the DLLs, and 2 folders: refs and wwwroot.  And now the deployed ASP.NET Core site works: static files and the API.

At this point, I was still stuck with my original problem – not being able to navigate to a URL within the Angular app.  ASP.NET would intercept the URL first, or the Angular app would start loading and would look for dependencies at the same level as the passed URL (which would be too deep).  Then I hard-coded the base href to the full start directory (//mywebsite.com/myVirtualDir/), and it all worked.  I will need to see if I can set the index.html base href dynamically (based on prod vs dev settings).  I also used Ben Cull’s recommendation to use the file server middleware.  Not sure if all of this is needed, but it works.

Hours wasted on the seemingly simple goal of passing routing through ASP.NET back to an Angular application to manage.  Nearly a full day wasted, with the scattered litter of failed trial-and-error attempts.  Solved, but not without scrapes and bruises, and a frustrating, unnecessary waste of time.

Log files are also key to digging into the “500: Internal Server Error” responses.  By adding NLog Extensions to the ASP.NET Core app, log files are saved to the directory of your choice (c:\temp\ by default).
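A minimal nlog.config sketch (the file-name pattern is an assumption; see the NLog.Extensions.Logging readme for wiring it into Startup):

```xml
<!-- write everything at Info and above to a dated file under c:\temp -->
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <target name="file" xsi:type="File"
            fileName="c:\temp\nlog-${shortdate}.log" />
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="file" />
  </rules>
</nlog>
```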

I do not typically bash Microsoft’s development tooling.  Sure – I have NOT spoken kindly of MS Outlook, MS browsers, Win 8, Pocket PCs….  BUT I am a true fan of Visual Studio, VS Code, and any language Anders H. wrote (Turbo Pascal, C#, TypeScript; I never used Delphi).  I even like PowerShell (although I use only the simple parts).

So why, Microsoft, why do you make deployment so hard (even to your own Windows servers)?


Addendum 12/6/2016

Ran into more trouble after upgrading to asp.net core 1.1.  The 1.1 version works very well on the development machine, but not so much on the server on deployment.

On the server, I could not get the command tools to work.  First, I could not migrate/update the database (“dotnet ef …”) from the command line.  After much fiddling, I had to manually import the database via SQL Server Management Studio.

Still on the server, I then tried “dotnet myapp.dll” from the command line.  This should run the app in Kestrel and give me lots of debug info for any problems that arise.  Unfortunately, I kept getting this critical error:

[screenshot: Kestrel critical error]

I made sure nothing atypical was being called in Startup.cs or Program.cs.  I updated the Core SDK and runtime on the server to 1.1 (and uninstalled 1.0).  I looked around for CLI updates – but according to “dotnet --info”, I am using the same CLI version (1.0.0-preview2-003121) on both my server and the developer machine (where the CLI run command works).  The Kestrel error could have been a bit more descriptive.  As a last Hail Mary, I tried calling the API via IIS using the URL, and it worked!

I have no idea why CLI Kestrel would not load (yet works via IIS) and why the CLI database tools (dotnet ef…) would not migrate/update the database.  But I’ll add it to my baffled list and move on.

I do not consider 1.1 releases to be extra-early adoption (these are not betas or alphas), but I feel this has been a painful adoption of a fully released product.  ASP.NET Core is a great overall product, but watch out for its sharp edges.


One other issue that can be painful to overcome, especially in testing, is CORS.  Rick Strahl has a nice summary of working with this in ASP.NET Core here.


Posted in Uncategorized | Comments Off on The dark side of Asp.Net Core: Deployment

On Converting from Angular 1 to Angular 2

Dear diary (blog) – I’ve neglected you.  It’s been over 2.5 years.  I see you haven’t changed – still patiently waiting (along with my unused dental floss, stack of possible junk mail, to-read articles, holiday cards to respond to…).  Habits are hard to maintain.  Well, I’m back today, and I might be seeing you again more frequently.

Note to reader (and mostly myself, since this entry is to keep from having to solve the same problems over again in 3 months): screenshots and example code will be added.

My software development stack changed significantly this summer. Angular and asp.net have each been in their own 2+ year limbos, while waiting for their successors. Each was released in final form this summer: Angular 2 and asp.net core. Along the way, visual studio code has established itself as a very fine code editor (in the Webstorm genre), especially for client side coding.

I have been working on a responsive version of my website, fpnotebook.com. More than a year ago, I created a first version based on angular 1 framework.

In the last month, I worked on converting the first version to one connected to a backend, to manage identity/authentication/authorization and instill backend functionality such as user notes stored in a database. I started with an asp.net core mvc app for note management, which I will discuss in another post. My intent was to get this working with the database and then add an Api that the angular app can call. Identity is the tricky part, especially communicating with client via tokens, and once Identity Server 4 is finalized, I will implement this part.

Conversion from Angular 1 to Angular 2 is not trivial (at least for me), and I spent the better part of a week converting version 1 to version 2 and completing a working draft: the Angular 2 version of FPN.  It is not complete.  The workshop allows selection of content, but you cannot yet generate quizzes or other output.  Annotations do not work.

My thoughts:

Angular Cli
At first, angular 2 seems so much more complicated than angular 1. With angular 1, I included a couple of scripts on my index page and it all worked. I knew where all my files were; started with js and ended with js.

Angular 2 is not like that.  You recognize the Angular 2 “quickstart” as a misnomer after you set up the app and tooling, set up the VS Code editor for launch/debug and TypeScript compilation, add testing…

However, angular-cli, despite the warnings about still being in beta, worked very well (A+). Magically (via node, webpack) it sets up an environment that takes care of all of the drudgery. It really is a quickstart that includes a basic setup with browser serving, testing (karma, jasmine), typescript compiling, minification, optimization.

Need to add a component with angular cli? At the command prompt, “ng g component named-thing” adds a NamedThingComponent in its own directory, along with a spec test file, html template and css file (or less or scss). The same is available for services, directives, pipes… Angular CLI sets up most of the wiring (add to module file, include in component annotation).
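From memory, a generate run looks roughly like this (exact file names and console output vary by angular-cli version):

```
PS> ng g component named-thing
  create src/app/named-thing/named-thing.component.css
  create src/app/named-thing/named-thing.component.html
  create src/app/named-thing/named-thing.component.spec.ts
  create src/app/named-thing/named-thing.component.ts
  update src/app/app.module.ts
```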

Most of this works great, but there are caveats. Testing is tricky, as mocking dependencies (modules, components, services) is not straightforward. However, the main app component serves as an example for setting this up, and there are quite a few examples on the angular website.

For some time, I was viewing the test output via the console window, frustrated that I could not see it in the nice jasmine output I prefer. At the same time, I was wondering what the Karma debug button did. Add the jasmine report viewer npm package and voila, the jasmine test view is back.

When I set up my workspace, I open the root working directory in windows explorer. I then open the folder in vs code and in two separate powershell windows. In one powershell window, I type ng serve (which serves the app at localhost:4200) and in the other, ng test (which launches a browser running karma/jasmine). I use the integrated terminal window in VS Code (Ctrl-`), which I have set to use powershell, to navigate directories and run commands such as those to generate components, directives, services…

The angular-cli set-up is a magical black box, and customization is not intuitive. Scripts and css are added to the angular config file, not to the index.html. Although I can get visual studio code to launch the site for debugging, I cannot get typescript debugging to work in VS Code. I had to debug the javascript in Chrome, which was workable.

CSS Frameworks

As with nearly all of my single page apps, this responsive version uses bootstrap 3. Bootstrap 4 has been in alpha for some time. Even the angular material framework, if I wanted to try it, is still in alpha. From what I’ve seen of material so far, I like the bootstrap look better. Having styled responsive websites from scratch with CSS, it is a relief to have professional looking content produced in early development stages without extra effort (albeit with a similar appearance to every other bootstrap page). I often use Bootswatch to at least change the Bootstrap color themes. I use the LESS precompiler and import the bootstrap/bootswatch “variables.less” file into my own LESS style sheet.
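A minimal sketch of that LESS setup (the file name and import path here are illustrative and depend on where the packages are installed):

```less
// my-styles.less (hypothetical) - pull in the theme variables so my own
// rules can reuse the Bootswatch colors and sizing
@import "../node_modules/bootstrap/less/variables.less";

.workshop-panel {
  background-color: @panel-bg;   // variables defined in variables.less
  border-color: @brand-primary;
}
```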

The other decision with Bootstrap is whether to use the official bootstrap javascript file for dynamic effects (e.g. carousel, accordion/collapse), or to use ng-bootstrap/ui-bootstrap. I used angular-ui bootstrap for the last version, but its angular 2 successor (ng-bootstrap) is still in development. I’ve found that most of the standard functionality of the official bootstrap.js integrates well with angular (except for more complicated solutions). In any event, for this solution, I stayed with the standard bootstrap.js.

Finally, there is the integration of css and angular 2. Angular cli makes this very easy. When a component is created, a css (or less or scss) file is also created that is linked to the component and namespaced for that particular component (via a generated attribute selector). In other words, the css/less/scss for a slideshow component would be specific to that component. At first I had assumed that CSS in a parent component would flow through to the child components, but that did not work. For css affecting the entire application, I used the main css file. However, I had to keep the main css as a plain css file, as I could not get the LESS or SCSS compilers to process the main styles file. There is probably a way to do this via angular cli setup (without resorting to other tooling, e.g. gulp).
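The wiring angular cli generates looks roughly like this (the names are illustrative, not from the actual project):

```typescript
// slideshow.component.ts (hypothetical) - angular cli links the stylesheet
// via styleUrls, and the rules inside it are scoped to this component only
import { Component } from '@angular/core';

@Component({
  selector: 'app-slideshow',
  templateUrl: './slideshow.component.html',
  styleUrls: ['./slideshow.component.less']
})
export class SlideshowComponent {
}
```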

Angular Code, Typescript, Observables and assorted language features

I seem to like languages written by Anders Hejlsberg. I started with Turbo Pascal in the 1980s, have used C# since the mid-2000s, and I really like Typescript. I like the structure (modules, classes, properties, public/private, interfaces), the typing, and the conveniences (generics, template strings). Yet, there are times when I need a dynamic object and can just fall back to any and plain old javascript. Finally, using Typescript is using ES6 and ES7 NOW, compiled to compatible code for today’s browsers.
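A small, self-contained illustration (with hypothetical names) of the features I mean: interfaces, classes, access modifiers, generics, and template strings.

```typescript
// A typed, generic cache of "pages" - hypothetical example code
interface Page {
  id: number;
  title: string;
}

class PageCache<T extends Page> {
  private pages: T[] = [];          // private: hidden from callers

  add(page: T): void {
    this.pages.push(page);
  }

  find(id: number): T | undefined { // typed return value
    for (const p of this.pages) {
      if (p.id === id) { return p; }
    }
    return undefined;
  }

  describe(id: number): string {
    const page = this.find(id);
    // ES6 template string, compiled down for today's browsers
    return page ? `Page ${page.id}: ${page.title}` : `No page ${id}`;
  }
}

const cache = new PageCache<Page>();
cache.add({ id: 1, title: "Airway" });
console.log(cache.describe(1)); // "Page 1: Airway"
```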

Pairing typescript with angular 2 works very well. Using typescript with angular 1 was awkward. Typescript has modules and classes, and I found using this with angular modules and controllers to be, at times, a confusing mess. Now with angular 2 written in typescript, the code is much cleaner. Plenty of angular 1 code was no longer needed when I moved to angular 2.

Asynchronous processes are tricky. Promises were a bit confusing, but I did somewhat grasp them in the last couple of years. I at least could copy/paste examples from my own code and that of others, modify them, and they worked. Observables are another story. When I follow online observable examples, I can grasp that exact example, and it works. But I am challenged when customizing the examples for my own use. Ben Lesh has some very nice intro videos online (check out those from angular connect), and he recommends getting to know a few high yield operators (map, filter, scan, mergeMap, switchMap, combineLatest, concat, do). I am learning, and hopefully the next new approach to async will not come so soon that I can’t master this iteration.
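To convince myself of the mechanics, it helped to sketch a toy observable in plain Typescript. This is not the real RxJS API, just a stripped-down illustration of what operators like map and filter do: each returns a new observable that transforms values as they are pushed through.

```typescript
// Toy observable - illustration only, not RxJS
type Observer<T> = (value: T) => void;

class MiniObservable<T> {
  constructor(private producer: (next: Observer<T>) => void) {}

  subscribe(next: Observer<T>): void {
    this.producer(next);
  }

  // each operator wraps this observable in a new one
  map<R>(project: (value: T) => R): MiniObservable<R> {
    return new MiniObservable<R>(next =>
      this.subscribe(value => next(project(value))));
  }

  filter(predicate: (value: T) => boolean): MiniObservable<T> {
    return new MiniObservable<T>(next =>
      this.subscribe(value => { if (predicate(value)) { next(value); } }));
  }
}

// Emit 1..4, keep the even values, double them
const source = new MiniObservable<number>(next => [1, 2, 3, 4].forEach(next));
const results: number[] = [];
source.filter(n => n % 2 === 0).map(n => n * 2).subscribe(n => results.push(n));
console.log(results); // results is [4, 8]
```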

Some mappings from angular 1 to 2 are simple and straightforward. Controller to Component, for example, is not a giant leap (albeit with new annotations, import statements, module definitions). Routing is much improved and I am using the angular version instead of the angular ui version (which I used for my angular 1 projects). It sounds as if the angular team focused on routing efficiency, lazy loading… and I think this shows in the final release. I like the way angular uses native html functionality (e.g. [src] maps to the native src attribute, (click) maps to the native click functionality).

One of my initial roadblocks in moving from angular 1 to 2 was how to implement the routing. Using angular 1 with angular UI router, I had routes that used a template with slots for 2 or 3 named outlets; each route defined what would fill each outlet. To solve this in angular 2, I ended up routing to view components: a basic template comprised of slots filled with other components (instead of router-outlets). For example, the page-view-component has a spot for the chapter-component and page-component.
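A sketch of that approach (component and route names here are illustrative, not the actual FPN code): the route targets a view component whose template composes the child components directly, rather than using multiple named router-outlets.

```typescript
import { Component } from '@angular/core';
import { Routes } from '@angular/router';

@Component({
  selector: 'page-view',
  // the template fills its slots with components, not router-outlets
  template: `
    <chapter-list></chapter-list>
    <page-detail></page-detail>
  `
})
export class PageViewComponent {
}

const routes: Routes = [
  { path: 'page/:id', component: PageViewComponent }
];
```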

There are areas which were more difficult to remap. Hrefs are replaced with routerLink, and routes are no longer prefixed with ‘#’. In angular 2, an href will reload the browser/entire framework and navigate to the target (browser refresh and all, lost data…); contrast with routerLink, which uses the current angular routing. Although not difficult conceptually, this remapping was time consuming. One frequent snag: [routerLink] binds to code, not a string.
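For example (an illustrative template fragment; the paths are made up):

```html
<!-- href triggers a full browser reload; routerLink stays inside the app -->
<a href="/page/cough">full reload, app state lost</a>

<!-- [routerLink] binds to code: an array of path segments -->
<a [routerLink]="['/page', pageId]">in-app navigation</a>

<!-- the unbracketed form takes a plain string path -->
<a routerLink="/page/cough">in-app navigation, static path</a>
```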

ngBindHtml became [innerHTML], which at first glance appears the same, but has one significant difference: only basic html works, not angular directives. We want to avoid href links because they reload the browser, but if we convert them to routerLink they will not work within [innerHTML]. My fpnotebook page content for this version sits in precompiled json with hyperlinks. This worked very well with ngBindHtml in angular 1, and without it… the project would be hampered. Fortunately, a dynamic component template solved this problem.

Angular 2 changed significantly during its alpha and beta cycles (which is what alpha/beta are for). As a result, online searches for code examples yield many answers that no longer work. Although this is less of an issue than with asp and asp.net (which have decades of deprecated examples on the web), it does make adapting solutions for the final angular 2 release more difficult.

Several VS Code plug-ins are helpful: Path intellisense (for filename typing in import statements) and Angular 2 typescript snippets.

Overall, the angular 2 experience is a very positive one, especially with angular cli. The draft in progress is here: angular 2 version of FPN.

The next steps are to integrate with an asp.net core backend and identity server 4. I hope to add code examples and screenshots to this post in the near future.

Until next time (which hopefully, dear diary/blog, will be sooner than 2.5 years).

Posted in Coding | Comments Off on On Converting from Angular 1 to Angular 2

December Fpnotebook Updates

It’s been a while since the last post. November and December were busy months at the FPN homestead.

I’ve started logging all Family Practice Notebook updates. 25 major topic updates in December. Not only is the list of updates more complete now, but I’ve stepped up the number of resources reviewed monthly.

December also saw a new, re-vamped look to the Fpnotebook website with some added functionality. Let me know what you think.

Many more topic updates are ready for January release. And in April 2014, I expect to release an iPhone and Android version.

Have a great new year.

Posted in Medicine | Comments Off on December Fpnotebook Updates

September 2013 Fpnotebook Updates

I started work with atmoapps (programmers of the Tarascon mobile app) on native mobile app versions of fpnotebook. These are planned for release in the first quarter of 2014. To allow for a more automated demonstration of highlights from site updates, I am creating a new “fpnotebook updates” topic in fpnotebook, which will appear in late October.

It was a busy month of reading (about 40 hours on 20 review articles), but here are the highlights. I intensively updated the airway chapter (http://www.fpnotebook.com/Lung/Airway/index.htm) after attending Dr. Levitan’s Practical Airway course in Baltimore, MD. Great course.

I was surprised at how little I remembered about Pituitary Adenomas. Updated at http://www.fpnotebook.com/Neuro/HemeOnc/PtryAdnm.htm

Also updated the hernia section at http://www.fpnotebook.com/Surgery/GI/AbdmnlHrn.htm based on Critical Decisions in Emergency Medicine.

September Em:Rap was, as usual, chock full of new information, but the pediatric cardiology segment was a particularly intense listen/read:

Crashing Newborn: http://www.fpnotebook.com/NICU/Birth/NntlDstrsCs.htm

Congenital heart disease: http://www.fpnotebook.com/CV/Peds/CngntlHrtDs.htm

Prescriber’s Letter had various medication precautions (Ketoconazole, Mefloquine, Plavix after CVA) and possible risks (more hype than data) regarding amlodipine and fish oil.

Posted in Uncategorized | Comments Off on September 2013 Fpnotebook Updates

They wait outside their ED room’s doorway, with their arms crossed.  A simple visit over, all but the discharge medication faxed, patient instructions added, discharge button clicked.  The visit started well with a pleasant conversation during the examination.  Evaluation efficient and appropriate.  But now they are waiting to go home, and their eyes are throwing daggers.  I could tell them I was interrupted on my way to the desk to complete their discharge.  I could tell them I was interrupted by an elderly man who could not breathe in Room 2, or chest pain in Room 3, or the clinic calling with a transfer and the radiologist calling with a new malignancy on CT.  But I simply apologize and repeat this cycle over and over again today.  Some days are like this.

One of the aspects of the emergency department that I find most challenging is patient flow in the midst of interruptions.  Critical patient arrivals, patient surges from triage, phone calls, asynchronous result review, documenting, RN questions and pharmacy clarifications.  In busy EDs, this can amount to more than 10 interruptions per hour in some studies.  My interruptions do not come close to this, but I still at times feel overwhelmed.

There were times working in clinic when I could fall an hour behind with work-ins and complex patients.  But my patients knew me and they would still be cordial to me despite the delay.  This is not always true in the ED.  I try to move fast, practice the best quality I know how, provide the best service I can and disposition as quickly as possible.  Despite this, they stand in their room’s doorway with their arms crossed, angry that they are still waiting for discharge.

In clinic, there was an overflow escape known as a schedule that would limit the flow of patients in.  On the worst days, there was still time for a sandwich and bathroom break.  There was still the appreciation for care from most of my patients.  

But today the emergency department surge continues.  It is on these days that histories and exams are clipped and compressed, and sign-outs to hospitalists are mediocre at best.  No time to open a reference or consider a quality differential.  Hungry, fatigued and operating on clinical reflexes and base knowledge.  Small conflicts with sparks of dissatisfaction from patients and hospitalists.  I could explain the attempts at maximizing patient status over hours in the ED.  How I tried to stave off an admission, but this failed.  Now the family is frustrated with the long ED course, and the hospitalist sees this as a dump.

So I continue to optimize my emergency department task juggling and patient flow.  But today as I leave the department, my shift over, I can still see a patient standing in their room’s doorway with their arms crossed and angry.   This was a challenging day.

Posted on by sjmoses | Comments Off on A Challenging Day in the Emergency Department