Monthly Archives: June 2015

Android

Intro

As a WPF developer, I took my talents to South Beach and quickly discovered that Miami is a tough market for native Windows development. As a result, I decided to transition from Visual Studio to Android Studio. In making that move, it only made sense to bring along the productivity practices I had acquired from .NET development. In this article, I will discuss how to port some of those productivity features from Visual Studio to Android Studio.

Key Bindings

Key bindings (also known as shortcut keys) are essential for software development productivity.

To leverage common Visual Studio shortcuts within Android Studio, perform the following:

General Keymap settings

  1. Select File | Settings | Keymap.
  2. Select “Visual Studio” from the Keymap combobox (Android Studio creates a “Visual Studio copy” once you customize it).

Key Bindings

To change a key binding, select an action in the “Settings” window and enter a key combination.

Use the search box to find shortcuts to modify:

      • Build Solution
        • Other | Build
          • Ctrl + Shift + B
      • Rename
        • Main Menu | Refactor | Rename…
          • Ctrl + R, R
      • Extract Method
        • Main Menu | Refactor | Extract | Method…
          • Ctrl + R, M

Snippets

Code snippets are great. They enable a user to auto-generate code from just a few characters that are typed in.

In Android Studio, Code Snippets are referred to as Live Templates.

  1. Select File | Settings | Live Templates
  2. Click the green “+” in the right margin
  3. Set the “Abbreviation” field to “prop”
  4. Set the “Description” field to “property getter/setter”
  5. Set the “Template Text” field to the following:
private String _stringProperty = null;

public String getStringProperty(){
   return _stringProperty;
}

public void setStringProperty(String value){
   _stringProperty = value;
}

Code Generation

To leverage the code snippet (aka: Live Template), perform the following:

    1. Select File | Settings | Keymap
    2. Navigate to Main Menu | Code | Generate
    3. Add shortcut: Ctrl + Period

Example:

Type “prop”, then press Ctrl+J followed by Space to expand the template.

Conclusion

As a WPF developer, I took my talents to South Beach and quickly discovered that Miami is a tough market for native Windows development. As a result, I decided to transition from Visual Studio to Android Studio, and it only made sense to bring along the productivity practices I had acquired from .NET development. In this article, I discussed how to port some of those productivity features from Visual Studio to Android Studio.

As discussed earlier, I am very much interested in contributing to your company’s success.

I sincerely hope that my disagreement with current members of lower management can be resolved. I recognize that your company has invested in my training and has entrusted me with sensitive information regarding its products. I still want to continue providing a return on that investment.

Please note that I have already demonstrated that I can save your company substantial money per project by doing what the existing software engineering department said was impossible. I developed 95% of a feature by myself, consisting of 700 lines of code spanning business logic and user interfaces, along with 76 automated tests that covered 90% of the code for that feature.

Instead of spending several minutes getting an application into a state where an issue can be debugged, I have already demonstrated that test automation can diagnose an issue within seconds. In addition, I have demonstrated that automated tests can discover breaking changes in code within a second, instead of developers or QA observing the defect days or weeks later (when it’s too late).

In conclusion, I am extremely motivated to help your company produce more value in less time and with less money. I have made enough mistakes in my twelve years of building test-driven solutions to have become an expert on what not to do. Unfortunately, the current culture in the software engineering department does not embrace my experience and refuses to acknowledge its own growth opportunities. I sincerely believe that people with my skills can be part of your department’s 20% investment that generates 80% of the return.

I wrote that letter to HR after handing in my resignation. I was hired as a consultant on the tail end of a doomed project whose lead developer had jumped ship. The project was a maintenance nightmare in which developers worked mandatory weekends for several weeks in an attempt to drive down the defect count before a client deadline. Developers stumbled over each other, modifying files that were already being modified by another developer and chasing defects that had been resolved once and then resurfaced later. These developers downright refused to write automated tests to assist their development effort. As a result, they had no clue which features they were breaking while resolving the defects they were chasing. QA would not even get their hands on the software until all the features had been constructed. Then QA would deliver a laundry list of feedback on all of the software, which resulted in expensive modifications coming late in the SDLC instead of cheap resolutions early in the SDLC.

In the end, everybody lost. The business couldn’t sell the software it had promised. The software, which cost hundreds of thousands of dollars to build, was ultimately rejected by the client.

The project I worked on was very small. At most, the project required two application developers. However, for whatever reason, the project was staffed with four. These developers were very excited to start the new project; after all, they were coming off of a failed one. They spent close to three weeks discussing UML diagrams and the entities that they just knew would be required for the feature to be built. While they commenced big upfront design using the continuously failing Waterfall methodology, drawing pretty diagrams and debating entity names, I cranked out 95% of the feature using Test-Driven Development. The module that I wrote to implement the feature was self-contained and thoroughly tested. Every requirement identified for the feature had a unit test to ensure compliance. I implemented both the UI and the business logic required for the feature and stubbed out the external dependencies so that I could test the implementation of the feature in isolation from the rest of the system.

I was very happy with the results of my effort. When the rest of the developers wrapped up their plans for the overall design, I showed them the working software that was ready to be connected, polished, and delivered. Yes, I demonstrated the benefits of applying agile practices to deliver thoroughly tested software far quicker and with higher quality than their waterfall-inspired approach. I also learned over time that clients just don’t care about the deliverables of the journey of building software. They just want to see the software ASAP so that they can make vital decisions sooner for their business.

The lead on the team just didn’t care about the efficiency with which the feature was built. He didn’t care that automated tests exercised 90% of the code. He only cared about his title. After all, he was “Chief Architect” at the company even though there were no other architects in the company. The team took what I had and made modifications to the code. This was fine. However, their modifications broke the automated tests. That was bad. As a result, I sent an email that began by praising their updates. At the end of the email, I requested that the team execute the tests that already existed before checking in code. I reminded them that these tests provide valuable feedback about changes to the code and will save the business time and money through early detection of breaking changes. I thought my request was perfectly valid. After all, don’t we all want to be notified of breaking changes as soon as we break a feature? For this company, the answer was no.

The next morning at the daily standup, the lead told the team to disregard my request. He said that the project was small and just didn’t warrant the overhead of tests. I couldn’t believe what I was hearing. Every successful project, no matter how small, gets extended with more features and evolves into a complex system over time. Hence, every small project is a seed that will grow into a bigger one if it succeeds. Every “architect” knows that, especially a “chief architect”. So why would any stakeholder refuse to have automated tests?

To be Continued…

NOTE:

Scott Nimrod is fascinated with Software Craftsmanship.

He loves responding to feedback and encourages people to share his articles.

He can be reached at scott.nimrod @ bizmonger.net

Intro

Whenever I start new projects for clients, I find myself constructing a new test API from scratch. A test API is a utility or library that provides common test scenario dependencies for automated tests. I was never conscious of the repetitive task of implementing a new test API until I attempted to implement the UserLogin feature in reference to the Feature-Driven Architecture article that I wrote.

After setting up my solution structure, I decided to perform TDD on the business logic required for user login. I immediately began to construct a test API so that I could provide general values for test input. For example, I created a constant called “SOME_ID” that, you guessed it, had some arbitrary integer assigned to it. I then created another constant called “SOME_PASSWORD” that returned some arbitrary text to denote a password. After adding several constants to this API, I finally recognized the value in making this library available on NuGet for others to use when performing TDD. As a result, I’m proud to announce that Bizmonger.TestAPI is now available on NuGet.

The following is the implementation for user authentication using this library.

Test Class:

using Login.UI;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using TestAPI;

namespace Login.Tests
{
    [TestClass]
    public class _Login
    {
        [TestMethod]
        public void valid_user_is_authenticated()
        {
            // Setup
            var login = new Commands().Login;
            var viewModel = new ViewModel();

            // Test
            var args = new object[3] { Gimme.SOME_ID,
                                       Gimme.SOME_PASSWORD,
                                       Gimme.SOME_AUTHENTICATOR_THAT_PASSES };
            login.Execute(args);

            // Verify
            Assert.IsTrue(viewModel.IsAuthenticated);
        }

        [TestMethod]
        public void invalid_user_is_not_authenticated()
        {
            // Setup
            var login = new Commands().Login;
            var viewModel = new ViewModel();

            // Test
            var args = new object[3] { Gimme.SOME_ID,
                                       Gimme.SOME_PASSWORD,
                                       Gimme.SOME_AUTHENTICATOR_THAT_FAILS };
            login.Execute(args);

            // Verify
            Assert.IsFalse(viewModel.IsAuthenticated);
        }
    }
}

The “Gimme” class

In my test API, I created a class called Gimme. The sole purpose of this class is to provide arbitrary values for business logic dependencies. Such dependencies could be a numeric value, a text value, or really long text.

Example:

. . . 

var args = new object[3] { Gimme.SOME_ID,
                           Gimme.SOME_PASSWORD,
                           Gimme.SOME_AUTHENTICATOR_THAT_PASSES };
login.Execute(args);

Now, I’m on the fence about which syntax to use. The code above is more expressive because of the Gimme prefix. However, we can also drop that prefix by importing the Gimme class with a using static directive in our test class.

Usage Example:

using Login.UI;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using static TestAPI.Gimme;

. . .

var args = new object[3] { SOME_ID,
                           SOME_PASSWORD,
                           SOME_AUTHENTICATOR_THAT_PASSES };

login.Execute(args);

Gimme Definition:

using TestAPI.Authenticators;

namespace TestAPI
{
    public partial class Gimme
    {
        public const int SOME_ID = 123456;
        public const int SOME_OTHER_ID = 456789;

        public const string SOME_EMAIL_ADDRESS = "ABC@BIZMONGER.COM";
        public const string SOME_OTHER_EMAIL_ADDRESS = "XYZ@BIZMONGER.NET";

        public const string SOME_PHONE_NUMBER = "2162914078";
        public const string SOME_OTHER_PHONE_NUMBER = "2169322921";

        public const string SOME_PASSWORD = "some_pasword";
        public const string SOME_OTHER_PASSWORD = "some_other_pasword";

        public const string SOME_SHORT_TEXT = "some_text";
        public const string SOME_OTHER_SHORT_TEXT = "some_alternative_text";

        public static readonly string SOME_LONG_TEXT = new string('X', 5000);
        public static readonly string SOME_OTHER_LONG_TEXT = new string('Y', 5000);

        public const int SOME_INTEGER_VALUE = 123;
        public const int SOME_OTHER_INTEGER_VALUE = 456;

        public const long SOME_LONG_NUMERIC_VALUE = 1234567890123456789;
        public const long SOME_OTHER_LONG_NUMERIC_VALUE = 9876543212345678;

        public const decimal SOME_DECIMAL_VALUE = 123.12345M;
        public const decimal SOME_OTHER_DECIMAL_VALUE = 456.45678M;

        public const decimal SOME_CURRENCY_VALUE = 99.99M;
        public const decimal SOME_OTHER_CURRENCY_VALUE = 49.99M;

        public static SuccessAuthenticator SOME_AUTHENTICATOR_THAT_PASSES = new SuccessAuthenticator();
        public static FailedAuthenticator SOME_AUTHENTICATOR_THAT_FAILS = new FailedAuthenticator();
    }
}
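
The Gimme class above exposes SuccessAuthenticator and FailedAuthenticator from a TestAPI.Authenticators namespace that is not shown in this post. A minimal sketch of what those test doubles might look like, assuming IAuthenticator lives in Login.Core (as the Commands class below suggests) and exposes a single Authenticate() method returning a bool, is:

namespace Login.Core
{
    // Assumed contract: the command only needs a pass/fail answer.
    public interface IAuthenticator
    {
        bool Authenticate();
    }
}

namespace TestAPI.Authenticators
{
    using Login.Core;

    // Test double that always authenticates successfully.
    public class SuccessAuthenticator : IAuthenticator
    {
        public bool Authenticate() { return true; }
    }

    // Test double that always fails authentication.
    public class FailedAuthenticator : IAuthenticator
    {
        public bool Authenticate() { return false; }
    }
}

With these in place, Gimme can hand a ready-made passing or failing authenticator to any test that needs one.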

If you’re dying to see the System under Test (SUT) implementation then look below.

Commands Definition:

using Bizmonger.Patterns;
using System.Windows.Input;
using Login.Core;

namespace Login.UI
{
    public class Commands
    {
        public Commands()
        {
            Login = new DelegateCommand(OnLogin);
        }

        private void OnLogin(object obj)
        {
            var args = obj as object[];
            var userId = args[0] as string;
            var password = args[1] as string;
            var authenticator = args[2] as IAuthenticator;

            MessageBus.Instance.Publish("IsAuthenticated", authenticator.Authenticate());
        }

        public ICommand Login { get; private set; }
    }
}
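
The Commands class relies on DelegateCommand from the Bizmonger.Patterns package, which is not listed in the post. The package’s actual implementation may differ; the following is only a sketch of the usual ICommand-wrapping-a-delegate pattern:

using System;
using System.Windows.Input;

namespace Bizmonger.Patterns
{
    // A minimal ICommand that forwards Execute to a supplied delegate.
    public class DelegateCommand : ICommand
    {
        readonly Action<object> _execute;
        readonly Predicate<object> _canExecute;

        public DelegateCommand(Action<object> execute, Predicate<object> canExecute = null)
        {
            _execute = execute;
            _canExecute = canExecute;
        }

        // Required by ICommand; unused in this sketch.
        public event EventHandler CanExecuteChanged;

        public bool CanExecute(object parameter)
        {
            return _canExecute == null || _canExecute(parameter);
        }

        public void Execute(object parameter)
        {
            _execute(parameter);
        }
    }
}

The namespace here only mirrors the using directives above; the published package remains the authoritative version.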

ViewModel Definition:

using Bizmonger.Patterns;

namespace Login.UI
{
    public class ViewModel
    {
        public ViewModel()
        {
            MessageBus.Instance.Subscribe("IsAuthenticated", obj => IsAuthenticated = (bool)obj);
        }

        public bool IsAuthenticated { get; private set; }
    }
}

Conclusion

In conclusion, whenever I started new projects for clients, I found myself constructing a new test API from scratch each time. I was never conscious of this repetitive task until I reviewed my post on Feature-Driven Architecture. I then realized that I could publish a tool that would not only benefit me but also spare my community from performing the same mundane tasks when practicing TDD, by leveraging a common test API. As a result, developers can now pull down Bizmonger.TestAPI from NuGet to support their TDD.

NOTE:

Scott Nimrod is fascinated with Software Craftsmanship.

He loves responding to feedback and encourages people to share his articles.

He can be reached at scott.nimrod @ bizmonger.net

Feature

A feature can be described as a group of stories that are related and deliver a package of functionality that end users would generally expect to get all at once.

Features

Each library represents a feature that contains a vertical slice of functionality within the application. This means that each library is fully functional from the User interface all the way to the data access layer.

[Image: Solution Explorer with one library per feature]

Infrastructure

As the architecture continues to get fleshed out with features, common code can be identified and consolidated into a shared library. We can call this shared library “Core”.

[Image: Solution Explorer with a shared “Core” library alongside the feature libraries]

Feature Composition

Earlier, I described a basic breakdown of an architectural design. However, each feature library identified so far tightly couples the user interface, the model, and the data access layer.

We can now split these responsibilities into their own libraries as follows:

[Image: each feature split into UI, Core, and DAL libraries]

As depicted above, each feature has been broken down into three separate libraries:

  • UI (User Interface)
  • Core (Infrastructure)
  • DAL (Data Access Layer)

By composing a feature into logical layers this way, our solution complies more strongly with the Single Responsibility Principle and, as a result, reduces the amount of code that needs to be recompiled when updates are made to the codebase.
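
To make this concrete, a solution organized along these lines might look something like the following (the feature and project names here are hypothetical examples):

MyApp.sln
    UserLogin.UI          (views and view-models for the UserLogin feature)
    UserLogin.Core        (business logic for the UserLogin feature)
    UserLogin.DAL         (data access for the UserLogin feature)
    Registration.UI
    Registration.Core
    Registration.DAL
    Core                  (shared infrastructure consumed by every feature)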

We could then focus on the feature itself by setting the scope within Solution Explorer:

[Image: setting the scope to a single feature in Solution Explorer]

We now have complete focus on the feature we’re maintaining as depicted:

[Image: Solution Explorer scoped to the feature being maintained]

Conclusion

In conclusion, I have explained a method of building out an architecture. By using this methodology, we can quickly identify the primary functions of the system without having to drill into folders called “Models”, “Views”, or “Repositories”. As a result, our Solution Explorer becomes the blueprint for the system’s architecture.

NOTE:

Scott Nimrod is fascinated with Software Craftsmanship.

He loves responding to feedback and encourages people to share his articles.

He can be reached at scott.nimrod @ bizmonger.net

Intro

Most of us unknowingly violate the Single Responsibility Principle when constructing our view-models. The XAML below consists of three buttons that, when pressed, show a description relative to each button. There is also a Clear button to clear all description fields.

XAML:

<Window .
        . . .
        xmlns:local="clr-namespace:WpfApplication2">

    <Window.Resources>
        <Style TargetType="Button">
            <Setter Property="Height" Value="40" />
            <Setter Property="Margin" Value="5" />
        </Style>
    </Window.Resources>
    
    <Window.DataContext>
        <local:ViewModel />    
    </Window.DataContext>

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="auto" />
            <RowDefinition Height="auto" />
            <RowDefinition Height="auto" />
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition />
            <ColumnDefinition />
            <ColumnDefinition />
        </Grid.ColumnDefinitions>

        <Button Grid.Column="0" Grid.Row="0" Content="Command1" Command="{Binding Command1}" />
        <TextBlock Grid.Column="0" Grid.Row="1" Text="{Binding Description1}" />

        <Button Grid.Column="1" Grid.Row="0" Content="Command2" Command="{Binding Command2}" />
        <TextBlock Grid.Column="1" Grid.Row="1" Text="{Binding Description2}" />

        <Button Grid.Column="2" Grid.Row="0" Content="Command3" Command="{Binding Command3}" />
        <TextBlock Grid.Column="2" Grid.Row="1" Text="{Binding Description3}" />
        
        <Button Grid.Column="1" Grid.Row="2" Content="Clear" Command="{Binding ClearCommand}" />
    </Grid>
</Window>

Now take the following view-model:

    public class ViewModel : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        public ViewModel()
        {
            Command1 = new DelegateCommand(OnDescription1Requested);
            Command2 = new DelegateCommand(OnDescription2Requested);
            Command3 = new DelegateCommand(OnDescription3Requested);

            ClearCommand = new DelegateCommand(obj => Description1 = Description2 = Description3 = string.Empty);
        }

        string _description1 = null;
        public string Description1
        {
            get { return _description1; }
            set
            {
                if (_description1 != value)
                {
                    _description1 = value;
                    OnNotifyPropertyChanged();
                }
            }
        }

        string _description2 = null;
        public string Description2
        {
            get { return _description2; }
            set
            {
                if (_description2 != value)
                {
                    _description2 = value;
                    OnNotifyPropertyChanged();
                }
            }
        }

        string _description3 = null;
        public string Description3
        {
            get { return _description3; }
            set
            {
                if (_description3 != value)
                {
                    _description3 = value;
                    OnNotifyPropertyChanged();
                }
            }
        }

        public ICommand Command1 { get; private set; }
        public ICommand Command2 { get; private set; }
        public ICommand Command3 { get; private set; }
        public ICommand ClearCommand { get; private set; }

        private void OnNotifyPropertyChanged([CallerMemberName] string propertyName = "")
        {
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
        }

        
 
        private void OnDescription1Requested(object obj)
        {
            // Some setup logic before property value is assigned. . .

            // Then assign property value
            Description1 = "some text for description1";
        }

        private void OnDescription2Requested(object obj)
        {
            // Some setup logic before property value is assigned. . .

            // Then assign property value
            Description2 = "some text for description2";
        }

        private void OnDescription3Requested(object obj)
        {
            // Some setup logic before property value is assigned. . .

            // Then assign property value
            Description3 = "some text for description3";
        }
    }

This view-model handles state and accepts commands. So what’s wrong with it? Maybe nothing. It provides everything our view requires to present itself and to submit requests. But look deeper into this view-model. What is its purpose? In other words, how well focused is it?

This view-model is handling two responsibilities: it reflects state, and it also manages requests. That alone violates the Single Responsibility Principle, which is meant to protect us from maintenance concerns. How ironic that one of WPF’s core design principles is to separate presentation from behavior, yet when practicing MVVM we religiously bloat our view-models to do the exact opposite, coupling behavior with state without giving it a second thought.

An Alternative Approach

What if we could create a mechanism for separating the responsibilities of state and behavior? Specifically, why can’t we create a responder to handle view requests and use our view-model only to reflect the state that our views require for presentation? Our actual view-model could then be a lot cleaner. Thus, we could offload the behaviors of our view-model to a more focused class whose sole responsibility is to respond to commands. That sounds like a much better solution than what we have currently.

I acknowledge that some people may say that my recommendation adds unnecessary complexity and does not produce any noticeable value. Their concerns are valid, at least until several people begin to maintain that same view-model. It is then that the view-model’s complexity skyrockets out of control and slows developers down. The view-model’s initial purpose becomes obscured by its many responsibilities, and it ends up looking more like Congress (slow, complex, and in debt).

Creating a ViewModelBase

Let’s start by creating a base class that will manage databinding notifications.

    public class ViewModelBase : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnNotifyPropertyChanged([CallerMemberName] string propertyName = "")
        {
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
        }
    }

Adding a Responder

Next, let’s add a responder. The responder’s core responsibility is to respond to commands. We’ll also use a pub/sub mechanism to perform messaging between components without coupling them to each other. In this case we use the “Bizmonger.Patterns” package, which can be downloaded from NuGet and provides a MessageBus for this purpose (a minimal sketch of such a bus appears after the Responder below). In this class we will publish messages after we have processed a command.

    public class Responder : ViewModelBase
    {
        public Responder()
        {
            Command1 = new DelegateCommand(OnDescription1Requested);
            Command2 = new DelegateCommand(OnDescription2Requested);
            Command3 = new DelegateCommand(OnDescription3Requested);

            ClearCommand = new DelegateCommand(obj => MessageBus.Instance.Publish("clear_descriptions"));
        }

        public ICommand Command1 { get; private set; }
        public ICommand Command2 { get; private set; }
        public ICommand Command3 { get; private set; }
        public ICommand ClearCommand { get; private set; }

        private void OnDescription1Requested(object obj)
        {
            // Some setup logic before property value is assigned. . .

            // Then assign property value
            MessageBus.Instance.Publish("Description1", "some text for description1");
        }

        private void OnDescription2Requested(object obj)
        {
            // Some setup logic before property value is assigned. . .

            // Then assign property value
            MessageBus.Instance.Publish("Description2", "some text for description2");
        }

        private void OnDescription3Requested(object obj)
        {
            // Some setup logic before property value is assigned. . .

            // Then assign property value
            MessageBus.Instance.Publish("Description3", "some text for description3");
        }
    }
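
The Responder and the view-model communicate only through the MessageBus from the Bizmonger.Patterns package, whose source is not shown here. As a rough mental model (not the package’s actual implementation), a minimal message bus with the Publish/Subscribe shape used above could look like this:

using System;
using System.Collections.Generic;

namespace Bizmonger.Patterns
{
    // A minimal singleton pub/sub bus keyed by message name.
    public class MessageBus
    {
        static readonly MessageBus _instance = new MessageBus();

        readonly Dictionary<string, List<Action<object>>> _subscriptions =
            new Dictionary<string, List<Action<object>>>();

        public static MessageBus Instance { get { return _instance; } }

        // Register a handler for every future publication of the given message.
        public void Subscribe(string message, Action<object> handler)
        {
            List<Action<object>> handlers;

            if (!_subscriptions.TryGetValue(message, out handlers))
            {
                handlers = new List<Action<object>>();
                _subscriptions.Add(message, handlers);
            }

            handlers.Add(handler);
        }

        // Broadcast a message (with an optional payload) to all current subscribers.
        public void Publish(string message, object payload = null)
        {
            List<Action<object>> handlers;

            if (_subscriptions.TryGetValue(message, out handlers))
            {
                foreach (var handler in handlers)
                {
                    handler(payload);
                }
            }
        }
    }
}

The real package also exposes members such as SubscribeFirstPublication (used in a later post), which this sketch omits.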

Update ViewModel

Let’s remove all command references from this view-model and let the class focus purely on state. In the constructor, we subscribe to updates via the MessageBus.

    public class ViewModel : ViewModelBase
    {
        public ViewModel()
        {
            MessageBus.Instance.Subscribe("clear_descriptions", 
                obj => Description1 = Description2 = Description3 = string.Empty);

            MessageBus.Instance.Subscribe("Description1", obj => Description1 = obj as string);
            MessageBus.Instance.Subscribe("Description2", obj => Description2 = obj as string);
            MessageBus.Instance.Subscribe("Description3", obj => Description3 = obj as string);
        }

        string _description1 = null;
        public string Description1
        {
            get { return _description1; }
            set
            {
                if (_description1 != value)
                {
                    _description1 = value;
                    OnNotifyPropertyChanged();
                }
            }
        }

        string _description2 = null;
        public string Description2
        {
            get { return _description2; }
            set
            {
                if (_description2 != value)
                {
                    _description2 = value;
                    OnNotifyPropertyChanged();
                }
            }
        }

        string _description3 = null;
        public string Description3
        {
            get { return _description3; }
            set
            {
                if (_description3 != value)
                {
                    _description3 = value;
                    OnNotifyPropertyChanged();
                }
            }
        }
    }

Update XAML

Let’s add the responder as a resource and use it as the data context for all command bindings within our XAML.

<Window .
        .
        xmlns:local="clr-namespace:WpfApplication2">

    <Window.Resources>
        <Style TargetType="Button">
            <Setter Property="Height" Value="40" />
            <Setter Property="Margin" Value="5" />
        </Style>
        
        <local:Responder x:Key="Responder" />
    </Window.Resources>
    
    <Window.DataContext>
        <local:ViewModel />    
    </Window.DataContext>

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="auto" />
            <RowDefinition Height="auto" />
            <RowDefinition Height="auto" />
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition />
            <ColumnDefinition />
            <ColumnDefinition />
        </Grid.ColumnDefinitions>

        <Button DataContext="{StaticResource Responder}" Grid.Column="0" Grid.Row="0" Content="Command1" Command="{Binding Command1}" />
        <TextBlock Grid.Column="0" Grid.Row="1" Text="{Binding Description1}" />

        <Button DataContext="{StaticResource Responder}" Grid.Column="1" Grid.Row="0" Content="Command2" Command="{Binding Command2}" />
        <TextBlock Grid.Column="1" Grid.Row="1" Text="{Binding Description2}" />

        <Button DataContext="{StaticResource Responder}" Grid.Column="2" Grid.Row="0" Content="Command3" Command="{Binding Command3}" />
        <TextBlock Grid.Column="2" Grid.Row="1" Text="{Binding Description3}" />

        <Button DataContext="{StaticResource Responder}" Grid.Column="1" Grid.Row="2" Content="Clear" Command="{Binding ClearCommand}" />
    </Grid>
</Window>

Conclusion

In conclusion, most of us unknowingly violate the Single Responsibility Principle when constructing our view-models. Our view-models usually handle two responsibilities: they reflect state, and they also manage client requests. That alone violates the Single Responsibility Principle, which is meant to protect us from maintenance concerns. When practicing MVVM, we unconsciously bloat our view-models with multiple responsibilities, coupling behavior with state, which is exactly what WPF’s separation of presentation and behavior is designed to prevent. Instead of accruing this technical debt, we can separate these concerns by creating a dedicated mechanism for responding to commands and use our view-models only to reflect the state our views present.

NOTE:

Scott Nimrod is fascinated with Software Craftsmanship.

He loves responding to feedback and encourages people to share his articles.

He can be reached at scott.nimrod @ bizmonger.net


If you think the title of this article is inappropriate, then surely you have never entertained yourself with Hollywood movies or primetime television. The very content of such programs caters to our subconscious desires. Note that it has been less than a thousand years since religious institutions went mainstream in programming our primal behaviors to be more humane through (you guessed it) primal behaviors (i.e., war, slavery, crusades, conquest, etc.).

So why the offensive header for this article?

Well, I no longer watch TV (i.e., sitcoms, reality shows, music television). However, I do entertain myself by watching Netflix documentaries on prison life, wars, drug cartels, and astronomy. After watching such programs, I typically think a lot about how I would handle myself in those conditions. Ultimately, with the exception of astronomy, I usually become stressed out and depressed as a direct result of consuming such content. However, despite everything in this world that I can neither control nor educate others about, I have always enjoyed escaping reality through my work as a software developer. I can literally create my own world of virtual autonomous workers to help me solve a business problem that will ultimately save a client lots of money or make a client lots of money.

When I am writing code, I am literally in my own world of “what if” possibilities and “oh crap!” consequences. I can literally see mirages of what I am building in my foresight even though my computer monitors show only swaths of text that no mere mortal could possibly understand.

Some people sleep around to mask their pain. Others smoke crack to stop the voices in their head. Personally, I like to drive to work at 5:30 or 6 o’clock in the morning and just slang code that ultimately makes this world a little bit better than it was. In conclusion, I tend to get lost creating more than I do consuming.

NOTE:

Scott Nimrod is fascinated with Software Craftsmanship.

He loves responding to feedback and encourages people to share his articles.

He can be reached at scott.nimrod @ bizmonger.net

How would you write a program that accepts command line inputs?
For example:

C:\>vl model

C:\>al repository

A common approach would be to use a switch statement to handle the command strings “vl” and “al”. Once the case for the command string is identified, the arguments associated with the command are processed accordingly.

Example:

var someCommand = "al model";
var commandRoot = someCommand.Split(' ');

switch (commandRoot[0])
{
    case "al":
        {
            // Some logic...
            break;
        }
}

The problem with switch statements

Switch statements should only be used when they operate on structures that do not have to be modified for extensibility. An example of this would be an enum that represents visibility states (Visible, Hidden, and Collapsed).

Example:

enum Visibility
{
    Visible,
    Hidden,
    Collapsed
}

In this case, the Visibility enum has few reasons to change, given its single responsibility of conveying these three states. As a result, using a switch statement over an enum that is unlikely to change is low maintenance.

However, a switch statement that requires modification each time a new business rule is introduced into the system becomes problematic. Specifically, this is an anti-pattern that violates the Open/Closed Principle, which states that code should be open for extension but closed for modification.

Returning to the command-line processing algorithm: we would have to modify the switch statement and add a new case block each time a new command is introduced. Thus, this code would have to be recompiled, and so would all of the code in the system that depends on it.

An alternative approach would be to implement a messaging system in which observers subscribe to the commands they are interested in. When a command is entered into the system, it is broadcast to the subscribers of that particular command for processing.

The following is some code I wrote for practice:

Client Request:

ExecuteCommand = new DelegateCommand(obj =>
    {
        _subscription.SubscribeFirstPublication(Messages.COMMAND_PROCESSED, OnProcessed);

        MessageBus.Instance.Publish(Messages.COMMAND_LINE_SUBMITTED, "addmodule my_module");
    });

Command Dispatching:

    public class CommandLinePreprocessor
    {
        static CommandLinePreprocessor()
        {
            MessageBus.Instance.Subscribe(Messages.COMMAND_LINE_SUBMITTED, obj =>
                {
                    var commandLine = obj as string;

                    // Guard: reject empty input and stop processing.
                    if (string.IsNullOrWhiteSpace(commandLine))
                    {
                        MessageBus.Instance.Publish(Messages.COMMAND_PROCESSED, CommandStatus.Failed);
                        return;
                    }

                    var rootCommand = commandLine.Split(' ').First().ToLower();
                    var handled = false;

                    // A handler that recognizes the command will publish this message.
                    MessageBus.Instance.SubscribeFirstPublication(Messages.COMMAND_LINE_HANDLER_FOUND, o =>
                        {
                            handled = true;
                        });

                    MessageBus.Instance.Publish(rootCommand, commandLine);

                    if (!handled)
                    {
                        MessageBus.Instance.Publish(Messages.COMMAND_PROCESSED, new CommandContext() { Line = commandLine, Status = CommandStatus.Failed });
                    }
                });
        }
    }

Command Handler:

    public class AddModuleCommand : CommandBase
    {
        #region Constants
        const string ADD_MODULE_COMMAND_FULL_TEXT = "addmodule";
        const string ADD_MODULE_COMMAND_SHORT_TEXT = "am";
        #endregion

        public override void Initialize()
        {
            MessageBus.Instance.Subscribe(ADD_MODULE_COMMAND_FULL_TEXT, obj => Execute(obj as string));
            MessageBus.Instance.Subscribe(ADD_MODULE_COMMAND_SHORT_TEXT, obj => Execute(obj as string));
        }

        public void Execute(string line)
        {
            MessageBus.Instance.Publish(Messages.COMMAND_LINE_HANDLER_FOUND);

            var isValidInstruction = BaseValidator.Validate(line, ADD_MODULE_COMMAND_FULL_TEXT, ADD_MODULE_COMMAND_SHORT_TEXT);

            if (!isValidInstruction)
            {
                MessageBus.Instance.Publish(Messages.COMMAND_PROCESSED, new CommandContext() { Line = line, Status = CommandStatus.Failed });
            }
            else
            {
                var tokens = line.Split(' ');

                _services.AddModule(new Entities.Layer() { Id = tokens[1] });
                MessageBus.Instance.Publish(Messages.COMMAND_PROCESSED, new CommandContext() { Line = line, Status = CommandStatus.Succeeded });
            }
        }
    }
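
The AddModuleCommand above derives from CommandBase and calls BaseValidator.Validate, neither of which is shown in the snippets. A minimal sketch, assuming CommandBase only standardizes the Initialize hook and BaseValidator merely checks that the first token matches one of the accepted command names and that an argument follows, might look like:

using System.Linq;

namespace CommandLine.Core   // hypothetical namespace for these helpers
{
    // Lets the dispatcher wire up each handler's subscriptions uniformly.
    public abstract class CommandBase
    {
        public abstract void Initialize();
    }

    public static class BaseValidator
    {
        // True when the first token matches one of the accepted command names
        // and at least one argument follows it.
        public static bool Validate(string line, params string[] acceptedCommands)
        {
            if (string.IsNullOrWhiteSpace(line)) { return false; }

            var tokens = line.Split(' ');
            var root = tokens.First().ToLower();

            return acceptedCommands.Contains(root) && tokens.Length > 1;
        }
    }
}

The _services field and the Entities.Layer type referenced in Execute come from the rest of the project and are likewise omitted in the original snippet.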

 

Messages:

    public class Messages
    {
        public const string COMMAND_LINE_SUBMITTED = "COMMAND_LINE_SUBMITTED";
        public const string COMMAND_PROCESSED = "COMMAND_LINE_PROCESSED";
        public const string COMMAND_LINE_HANDLER_FOUND = "COMMAND_LINE_HANDLER_FOUND";
    }