
HTTP 301: The requested resource SHOULD be accessed through returned URI in Location Header.

Ever attempted to test a public API using what you think are the correct credentials and query parameters, only to get the above error?

For example, I tried the SumoLogic Search Jobs API with what I thought were the correct credentials and query parameters and kept getting this response:

{
  "status" : 301,
  "id" : "OLXXO-HINM6-3BXX7",
  "code" : "moved",
  "message" : "The requested resource SHOULD be accessed through returned URI in Location Header."
}

I even passed in an invalid URL and received the same response with a different “id”.

A quick solution to this problem is to run the same request through curl with the -v (verbose) option, which prints the response headers, including the Location you are supposed to target.
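For reference, the command looks something like the following; the access ID and key are placeholders, and the query parameter is elided just as in the response above:

curl -v -u <accessId>:<accessKey> "https://api.sumologic.com/api/v1/search/jobs?query=nnnnn"

The relevant response headers from the verbose output are shown below: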

< Cache-control: no-cache="set-cookie"
< Content-Type: application/json; charset=ISO-8859-1
< Date: Sat, 05 Nov 2016 03:59:46 GMT
< Location: https://api.us2.sumologic.com/api/v1/search/jobs?query=nnnnn

In my case, I was targeting:

https://api.sumologic.com/api/v1/search/jobs

instead of

https://api.us2.sumologic.com/api/v1/search/jobs

Sprint story work breakdown – how do you do it?

Every agile team appears to break down the tasks involved in completing a sprint story a little differently. Also, every team appears to have a different perspective on the Scrum Definition of Done. My perspective aligns closely with a definition from one of my previous managers:

A story is considered done if it is shelvable, packageable, and shippable. Basically, at the drop of a hat it can be deployed to production or made available to end customers, with all associated artifacts including supporting documentation.

This is the philosophy I use as a guideline when confronted with the task of breaking down sprint stories. And why?

Let’s start by stating the assumption that we have a story with a well-written, understood, and itemized set of user, technical, or deployment requirements. These requirements should drive development, testing, and documentation. Or should they?

When a story is considered done, it should be possible to validate that each requirement was met, including the existence of a component shippable to end customers, whoever they may be. At the end of the sprint, the team as a whole, including the PM and any additional stakeholders, should be able to validate that each requirement listed in the story is appropriately captured in the resulting artifact.

There are various distinct development tasks involved in taking an idea from concept to market. These tasks include:

  • Design: captures high-level design activity, including UX work for UI-related stories.
  • Development: everyone knows what this is all about. Yes, this is the task which captures all coding effort.
  • QA: everyone should also know what this is about. Someone has to validate the output of the Development task to ensure it meets the user requirements specified in the story. An interesting point of contention I have run into involves the source document from which QA should author test cases. One school of thought says test cases should be driven from the user requirements in the story; another says from the output of the Design task. This alone is an interesting topic on its own.
  • Deployment: this task captures work such as creating Chef scripts or other activities dedicated to ensuring code makes it from the build machine onto our production servers. Some companies use DevOps engineers for this. Others get the same developers to do it all. Again, another interesting topic on its own.
  • Documentation: the task which almost never gets the time of day, especially if the product is targeted at internal customers.

On one team, we had a hard and fast rule stating that EVERY story should be broken down into each of the aforementioned subtasks. However, as with a lot of things in life, such hard and fast rules do not always apply and often completely break down, forcing the team into a “process-oriented as opposed to goal-oriented mindset,” as one of my team members succinctly put it. I see these tasks as mere guidelines. Completing some stories will require all of these subtasks while others will not, and it is up to the Scrum Master, in consultation with the team, to use wise judgement to make this decision.

Should these distinct activities be captured by individual JIRA subtasks? I personally think so. Individual subtasks allow distinct teams to start work in parallel, enabling early engagement by all respective teams and possibly allowing for faster delivery of the feature. Of course, the assumption here is that we have distinct teams responsible for each facet of the development workflow. If a single developer is responsible for orchestrating each of the aforementioned phases, it is not necessary to explicitly break the story down into such subtasks, even though it may still be worth capturing the effort required during each stage of the workflow.

What do you other Scrum Masters do?

Should models have simple methods?

I recently had an interesting conversation with some members of my team.

We have a class that defines the configuration of a communication infrastructure. While utilizing an instance of this class with a factory, we decided to create several methods in the model to allow us to perform simple checks, such as IsChannelEnabled, as illustrated below:


using System;
using System.Collections.Generic;
using System.Linq;

public class ClientCommunicationConfiguration
{
    public string Version { get; set; }
    public string Description { get; set; }
    public bool IsEnabled { get; set; }
    public List<ChannelConfiguration> Channels { get; set; }

    public bool HasChannels()
    {
        return Channels != null && Channels.Any();
    }

    public bool IsChannelEnabled(string channelName)
    {
        if (!HasChannels())
            return false;

        return Channels.Any(c => string.Compare(c.Name, channelName, StringComparison.OrdinalIgnoreCase) == 0 && c.IsEnabled);
    }

    public bool IsMethodEnabled(string messageName)
    {
        // Guard against a null or empty channel list, as IsChannelEnabled does.
        if (!HasChannels())
            return false;

        return Channels
            .Where(c => c.HasMessages() && c.IsEnabled)
            .SelectMany(c => c.Messages)
            .Any(m => string.Compare(m.Name, messageName, StringComparison.OrdinalIgnoreCase) == 0 && m.IsEnabled);
    }
}
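For context, the factory (or any other caller) uses these checks along the following lines; the channel name here is purely hypothetical:

var config = new ClientCommunicationConfiguration
{
    IsEnabled = true,
    Channels = new List<ChannelConfiguration>
    {
        new ChannelConfiguration { Name = "Email", IsEnabled = true }
    }
};

bool canSend = config.IsChannelEnabled("email"); // true; the comparison ignores case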

A member of my team made a good argument stating that models should only expose properties: it is not the responsibility of the model to make these kinds of decisions. His argument is that these methods should reside in the factory class or in some other management entity which contains the business logic to make these determinations. It is definitely a good point, although I made the following counterarguments:

  1. Every class in .NET comes with three methods: ToString(), GetHashCode(), and Equals(). Therefore models are not pure in that sense.
  2. We can encapsulate minimal logic in a model to allow one to make certain determinations that are inherent in the model’s definition.
  3. Encapsulating such logic in the model makes the model testable as well; otherwise we have to create entities just to wrap such logic and ensure testability.
  4. There is no such thing as a strict model without methods. Models are simply serializable objects. They can still expose methods that do simple checks, return data based on their internal state, and perform simple validation.
  5. This validation code can be re-used.
  6. OO means an object should contain both data (properties) and behavior (methods).

The one advantage I see in not having these methods in the model is simply a matter of purity. Otherwise, I do not see a realistic reason why models should not expose methods.
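For illustration, the property-only alternative my colleague argued for would host the same check outside the model, along these lines; the extension class is hypothetical and just one way to do it:

public static class ClientCommunicationConfigurationExtensions
{
    // The same determination, but owned by a helper rather than the model itself.
    public static bool IsChannelEnabled(this ClientCommunicationConfiguration config, string channelName)
    {
        return config?.Channels != null
            && config.Channels.Any(c =>
                string.Equals(c.Name, channelName, StringComparison.OrdinalIgnoreCase)
                && c.IsEnabled);
    }
}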

What are your thoughts?


Effect of Redis cluster master/slave reconfiguration

Something, possibly a network or cluster failure, happened, requiring the Redis cluster to switch around the masters. The default port for Redis cluster masters is 6379. However, after the switch, the Redis masters were listening on port 6380.

None of our connection strings pointing to the Redis cluster explicitly specify a port, which means our services were all trying to publish and subscribe to Redis masters on port 6379, masters which were no longer there after the switch.

This information was obtained by connecting to a Redis node and executing the info command.

C:\dev\tools\redis>redis-cli -h 1.1.2.3
1.1.2.3:6379> info

# Server
redis_version:3.0.7
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:46ce43dec62732a2
redis_mode:cluster
os:Linux 2.6.32-642.1.1.el6.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.7
process_id:30733
run_id:fa7f530a07abfeae43f45a45e6f5ec03fa738864
tcp_port:6379
uptime_in_seconds:5263766
uptime_in_days:60
hz:10
lru_clock:12363344
config_file:/etc/redis/6379/6379.conf

# Clients
connected_clients:52
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:2397912
used_memory_human:2.29M
used_memory_rss:9166848
used_memory_peak:4133760
used_memory_peak_human:3.94M
used_memory_lua:36864
mem_fragmentation_ratio:3.82
mem_allocator:jemalloc-3.6.0

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1471980906
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:0
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_current_size:80890
aof_base_size:1568
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0

# Stats
total_connections_received:712
total_commands_processed:6673948
instantaneous_ops_per_sec:1
total_net_input_bytes:265947707
total_net_output_bytes:327706638
instantaneous_input_kbps:0.05
instantaneous_output_kbps:0.31
rejected_connections:0
sync_full:1
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:27
keyspace_misses:0
pubsub_channels:7
pubsub_patterns:0
latest_fork_usec:53252
migrate_cached_sockets:0

# Replication
role:slave
master_host:1.1.2.4
master_port:6380
master_link_status:up
master_last_io_seconds_ago:4
master_sync_in_progress:0
slave_repl_offset:8171386
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:6439780
repl_backlog_histlen:1048576

# CPU
used_cpu_sys:3938.52
used_cpu_user:2750.70
used_cpu_sys_children:4.43
used_cpu_user_children:0.37

# Cluster
cluster_enabled:1

# Keyspace
db0:keys=12,expires=0,avg_ttl=0

This output indicates that this node, which was previously believed to be a master, has now been relegated to a slave. For pub/sub in Redis to work, the connection strings should specify either the exact IPs and ports of the Redis master nodes, or the IP addresses and ports of all the nodes in the Redis cluster.

This problem manifested as a failure of Redis to recognize a subscription to a channel when the appropriate client started. This client subscribes to a Redis channel during startup. However, while monitoring activity on all Redis nodes using the monitor command, it was observed that when the client restarted, no subscription for the channel was being registered with Redis. Also, when the internal RESTful services published a message to Redis, this activity was not being recorded while monitoring the three “master” nodes in the cluster.
This was with the original connection strings specifying the IP addresses of the three Redis boxes without ports, as follows:

<add name="redis" connectionString="1.1.2.3,1.1.2.4,1.1.2.5" />
After running the Redis info command, determining that there were no masters listening on the default port of 6379, and explicitly specifying the ports on which the masters were actually listening, all services were able to establish a connection with Redis.

So, here’s an interim solution which works until we come up with a comprehensive strategy:

All Redis connection strings should include all the nodes (masters and slaves) with explicit IP addresses and ports. For example, these settings as configured in the RESTful and WebSocket services look like this:

<add name="redis" connectionString="1.1.2.3:6379,1.1.2.4:6379,1.1.2.5:6379,1.1.2.3:6380,1.1.2.4:6380,1.1.2.5:6380" />

While researching this, it was also discovered that there is a channel called “__Booksleeve_MasterChanged”, which carries a change notification when the master configuration changes. Clients can subscribe to messages on this channel to detect a cluster topology change and act accordingly. The list of channels currently open on a Redis node can be retrieved using the command pubsub channels.
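Assuming the services use StackExchange.Redis (the library descended from Booksleeve, which is where that channel name comes from), a minimal sketch of listening for the notification looks like this; the connection string is the one above:

using System;
using StackExchange.Redis;

class MasterChangeListener
{
    static void Main()
    {
        // Connect with every node and port listed explicitly, per the interim fix above.
        var muxer = ConnectionMultiplexer.Connect(
            "1.1.2.3:6379,1.1.2.4:6379,1.1.2.5:6379,1.1.2.3:6380,1.1.2.4:6380,1.1.2.5:6380");

        var subscriber = muxer.GetSubscriber();

        // React whenever a master change is broadcast, e.g. by logging or reconnecting.
        subscriber.Subscribe("__Booksleeve_MasterChanged", (channel, message) =>
            Console.WriteLine("Master configuration changed: " + message));

        Console.ReadLine(); // keep the process alive to receive messages
    }
}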


A refresher on .NET Binding Redirect

What exactly does this entry in the .csproj mean?

<Reference Include="Newtonsoft.Json, Version=4.5.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL">
  <HintPath>..\..\packages\Newtonsoft.Json.1.0.2\lib\4.0\Newtonsoft.Json.dll</HintPath>
  <Private>True</Private>
</Reference>

First, let's break it down one element at a time:

<Reference Include= is the XML element that declares an assembly reference in a project.

The text within the Include attribute, Newtonsoft.Json, Version=4.5.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed, processorArchitecture=MSIL, is the fully qualified name of the .NET assembly.

<HintPath> denotes the location where Visual Studio will first attempt to find the referenced DLL before falling back to its probing paths.

As you can see, there are two versions here: 1.0.2, the NuGet package version in the HintPath, and 4.5.0.0, the assembly version, which typically leads to confusion. However, when having a discussion around assemblies, it is best to stick with the one used in the fully qualified name, which in this case is 4.5.0.0. This is the version we use in the rest of this article.

So, why does all of this matter?

Different projects in a solution can use different versions of a DLL. For example, one project could rely on v4.5.0.0 of Newtonsoft.Json while another relies on v6.0.0.0. However, when both projects are built to form the solution package, which version will be used, if each has the correct version of the assembly in its hint path?

As we all know (or should), there cannot be two DLLs with the same file name within a folder. So when our solution is deployed, there will be only one Newtonsoft.Json.dll in the installation folder. The version deployed will depend on the last project that was built and had its output copied into the application's installation folder.

But what if we deployed v4.5.0.0 of the assembly? What would the project that relied on v6.0.0.0 do when it has to resolve its types?


This is where .NET binding redirects come in. In simple terms, a binding redirect instructs the .NET runtime to load a specific version of an assembly whenever a version within a stated range is requested, for example when the exact version named in a referencing assembly's manifest is not the one deployed. It is a configuration in app.config or web.config and typically looks like this:

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
      <bindingRedirect oldVersion="0.0.0.0-8.0.0.0" newVersion="4.5.0.0" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>

Basically, we are saying: hey .NET runtime, if you are attempting to resolve an assembly called Newtonsoft.Json with any version from 0.0.0.0 to 8.0.0.0, please look for and use Newtonsoft.Json version 4.5.0.0.

This will all work like magic, but you had better be sure that Newtonsoft.Json 4.5.0.0 is deployed alongside your application and that this version is fully compatible with all versions within the range 0.0.0.0 to 8.0.0.0.
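One quick way to sanity-check which version the runtime actually bound to is to interrogate the loaded assembly; a minimal sketch:

using System;
using Newtonsoft.Json;

class BindingCheck
{
    static void Main()
    {
        // Reflect over the assembly the runtime resolved for JsonConvert.
        var assembly = typeof(JsonConvert).Assembly;
        Console.WriteLine(assembly.GetName().Version); // expect 4.5.0.0 after the redirect
        Console.WriteLine(assembly.Location);          // where it was actually loaded from
    }
}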

Happy Coding.

Git failed to lock refs/heads/branch

This one threw me off, but here is how you would typically run into such a problem. It is especially likely if you are mostly using Git via the command line.

Assumptions:

  1. You have a remote branch named feature/INT-4765-bad-things-predictor.
  2. You are using git via command line.
  3. You are working on Windows.

So here is what you do:

  1. You check out the branch by typing git checkout feature/int-4765-bad-things-predictor (note the lowercase name).
  2. You make some changes on this new local branch.
  3. You run the git add and git commit commands to stage and commit the changes, readying them to be pushed to the remote.
  4. You attempt to push the changes and are welcomed with an error which resembles this:

remote: error: failed to lock refs/heads/feature/int-4765-bad-things-predictor
To http://knji@stash.pidac.corp:7990/scm/int/home-reno-mentor.git
! [remote rejected] feature/int-4765-bad-things-predictor -> feature/INT-4765-bad-things-predictor
error: failed to push some refs to 'http://knji@stash.pidac.corp:7990/scm/int/home-reno-mentor.git'

This error does not say much, but thanks to Google, the first link presented if you search for “git failed to lock ref” is this StackOverflow listing. Basically, this is an overly convoluted (if you will) message just to tell me that the branch being pushed does not match the ref on the remote.

The problem here lies with case sensitivity and Git on Windows. A branch named feature/INT-4765-bad-things-predictor is apparently not the same as feature/int-4765-bad-things-predictor. Because the Windows file system is case-insensitive, the lowercase checkout succeeds locally, but the remote's refs are case-sensitive, so the push targets a ref whose case does not match.

To fix this problem, I did two things:

  1. Navigated to the actual branch ref on my local machine (under .git\refs\heads) and capitalized the file name to match the branch name on the remote.
  2. Re-ran git checkout with the branch name exactly as it appears on the remote, respecting case: git checkout feature/INT-4765-bad-things-predictor.

This should resolve the issue and allow you to push your changes to the remote.
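Alternatively, a branch rename may accomplish the same thing without hand-editing files under .git; a sketch using the same branch names:

git branch -m feature/int-4765-bad-things-predictor feature/INT-4765-bad-things-predictor
git push origin feature/INT-4765-bad-things-predictor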


Happy Coding.

IIS application unable to connect to SQL Server using Integrated Security

If you are running an IIS application configured to access the database using Windows Integrated Security, ensure that you have configured the application pool identity to use your Windows login user name.

Example:

My machine name is knjiT450.  I have an IIS application configured to use Windows Integrated Security to access SQL Server as follows:

<add name="p_funding" connectionString="Data Source=servername.companyname.corp;Initial Catalog=test_funding;Integrated Security=True" providerName="System.Data.SqlClient" />

When running the application, an exception is thrown with the following error:

Login failed for user 'CORP-DOMAIN\KNJIT450$'.


What is with the $ sign? That threw me off. Also, knjiT450 is definitely not my domain username. Instead, it is the name of the machine on which IIS is running.

It turns out the application was running under the default application pool identity, which uses the built-in account ApplicationPoolIdentity. When this account reaches out to a remote resource such as SQL Server, it authenticates as the machine account, corp-domain\machine-name$, hence the trailing $ sign. I changed the pool to use a custom account, setting the username to my full domain username, i.e. corp-domain\knji, along with my Windows password, and that did the trick.
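For reference, the same change can be scripted with appcmd instead of clicked through IIS Manager; a sketch, assuming the default pool name and placeholder credentials:

%windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /processModel.identityType:SpecificUser /processModel.userName:corp-domain\knji /processModel.password:<password>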