Visual Studio is not updating my NuGet Packages – what to do?

You have projects relying on NuGet packages.  Some of these packages have been updated and you know it.  However, for one reason or another (perhaps the packages do not increment their version with each new release), Visual Studio does not detect that the packages have changed and therefore does not update them.  What do you do?

A quick and dirty solution is to delete the respective NuGet packages from the following two places and then build your solution again:

  • Local NuGet repository: on my Windows 8 machine it is found at C:\Users\my-username\.nuget\packages
  • Local NuGet cache for the packages in this solution.  This folder is typically found in the root of the solution.
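
If you have the NuGet command-line tools installed, running nuget locals all -clear (or dotnet nuget locals all --clear with the newer tooling) should clear these caches in one step.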

Do this and you are good to go.

Happy Coding.

Unable to start debugging on a web server. Could not start ASP.NET debugging.

If you attempt to debug an ASP.NET web application and get the above error, you can Google it and find a lot of hits.  There are also numerous posts on StackOverflow for this error, but interestingly none of the answers in those posts worked for me.

What worked for me was simple.

The application was being deployed in the DefaultApp pool with the following properties:

  • .NET CLR version = 4.0
  • Managed Pipeline = Integrated
  • Identity = domain/my-user-name

I changed the target application pool to one called .NET 4.5 with the following properties:

  • .NET CLR version = 4.0
  • Managed Pipeline = Integrated
  • Identity = ApplicationPoolIdentity

With this change, I was able to run the application in debug mode.
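
If you prefer the command line, the same change can be made with appcmd; the site and application names below are placeholders for your own:

%windir%\system32\inetsrv\appcmd set app "Default Web Site/MyApp" /applicationPool:".NET 4.5"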

Happy coding.

Be on the lookout for StackExchange.Redis ConnectionMultiplexer ConnectionFailed event misfires

We are using the StackExchange.Redis ConnectionMultiplexer class to manage our connections to Redis.  Without clear guidance from the documentation, we attempted to create our own retry strategy on top of this client, with code that looks like this:

 

 public RedisClient()
     : this(GetConnectionString())
 {
 }

 public RedisClient(string connectionString)
 {
     _logger = LogServiceProvider.Instance.GetLogger(GetType());
     _connectionString = connectionString;
     MaxRetryAttempts = 10;
     DelayBeforeRetryInMillisecs = 500;
     InitializeConnection();
 }

 private void InitializeConnection()
 {
     _logger.Info("Initializing a connection to the Redis cluster.");
     bool isReconnectionAttempt = false;

     if (_connectionMultiplexer != null)
     {
         Debug.WriteLine("disposing " + _connectionMultiplexer.GetHashCode());
         _connectionMultiplexer.ConnectionFailed -= HandleConnectionFailedEvent;

         // Close the old multiplexer without waiting for pending operations to complete.
         _connectionMultiplexer.Close(false);
         isReconnectionAttempt = true;
         _logger.Info("This is a reconnection attempt to the Redis cluster.");
     }

     _connectionMultiplexer = ConnectionMultiplexer.Connect(_connectionString);
     _needConnect = !_connectionMultiplexer.IsConnected;
     _connectionMultiplexer.ConnectionFailed += HandleConnectionFailedEvent;

     Debug.WriteLine("created " + _connectionMultiplexer.GetHashCode());

     if (!_needConnect && isReconnectionAttempt)
     {
         _logger.Info("Reconnection to the Redis cluster succeeded.");
         RaiseRedisConnectionReestablished();
     }

     if (!_needConnect)
     {
         _logger.Info("Connection successfully established to the Redis cluster.");
     }
     else
     {
         _logger.Error("Cannot establish a connection to the Redis cluster.");
     }
 }

 private void HandleConnectionFailedEvent(object sender, ConnectionFailedEventArgs args)
 {
     // We can receive multiple of these events, so we need to be careful.
     Debug.WriteLine("received connection failure event from " + sender.GetHashCode());
     Debug.WriteLine("multiplexer id is " + _connectionMultiplexer.GetHashCode());

     // There's a known issue where the old ConnectionMultiplexer keeps firing this
     // event even after it has been disconnected and closed, so we ignore any event
     // that does not come from the current instance.
     if (sender.GetHashCode() != _connectionMultiplexer.GetHashCode())
         return;

     _logger.Error("Connection to the Redis cluster has failed with the following exception:", args.Exception);

     RaiseRedisConnectionFailed(args.Exception);
     AttemptReconnection(1);
 }

 private void AttemptReconnection(int trial = 1)
 {
     if (trial > MaxRetryAttempts)
     {
         _logger.InfoFormat("Have attempted to reconnect to the Redis cluster {0} times. Aborting.", MaxRetryAttempts);
         return;
     }

     if (_connectionMultiplexer.IsConnected)
     {
         _logger.Info("Reconnection to the Redis cluster is no longer required.");
         return;
     }

     _logger.InfoFormat("Attempting reconnection attempt {0} to the Redis cluster.", trial);
     RedisReconnectionAttempt?.Invoke(this, new RedisReconnectionAttemptEventArgs(trial));

     // Wait briefly, then try to reconnect.
     Thread.Sleep(DelayBeforeRetryInMillisecs);
     InitializeConnection();

     trial = trial + 1;
     TotalNumberOfReconnectionAttempts = TotalNumberOfReconnectionAttempts + 1;
     AttemptReconnection(trial);
 }

When a network drop is simulated, the ConnectionFailed event fires as expected. When this happens, we dispose of the current ConnectionMultiplexer instance and create a new one. We do this to avoid the situation where accessing the current ConnectionMultiplexer instance throws an exception indicating the object has already been disposed.

So we dispose of the current instance, or at least we think we do, and create a new one. Yet somehow the original instance, which experienced the network drop but is now disposed, still manages to fire ConnectionFailed events even though we are no longer supposed to be listening.  After all, we unsubscribed from the event in the InitializeConnection method above.

I could not get an answer on StackOverflow, and I could not determine the cause just by reading the source code for the ConnectionMultiplexer class.  What I could do, however, was put in a small hack to ensure these noisy duplicate events are ignored unless they come from the current ConnectionMultiplexer instance.
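
Incidentally, the same guard can be expressed with object.ReferenceEquals, which states the identity comparison directly and avoids the (theoretical) possibility of two instances sharing a hash code:

 if (!ReferenceEquals(sender, _connectionMultiplexer))
     return;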

I even have a test to ensure my hack works.

Happy coding.

HTTP 301: The requested resource SHOULD be accessed through returned URI in Location Header.

Ever attempted to test a public API using what you think are the correct credentials and query parameters, only to get the above error?

For example, I tried the SumoLogic Search Jobs API with what I thought were the correct credentials and query parameters and kept getting this response:

{
 "status" : 301,
 "id" : "OLXXO-HINM6-3BXX7",
 "code" : "moved",
 "message" : "The requested resource SHOULD be accessed through returned URI in Location Header."
}

I even passed in an invalid URL and received the same response with a different “id”.

A quick solution to this problem is to run the curl command with the -v (verbose) option, which will spit out the location you are supposed to target.
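
A hypothetical invocation looks like this (the access ID, access key and query are placeholders):

curl -v -u "ACCESS_ID:ACCESS_KEY" "https://api.sumologic.com/api/v1/search/jobs?query=nnnnn"

The relevant response headers then appear in the verbose output: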

< Cache-control: no-cache="set-cookie"
< Content-Type: application/json; charset=ISO-8859-1
< Date: Sat, 05 Nov 2016 03:59:46 GMT
< Location: https://api.us2.sumologic.com/api/v1/search/jobs?query=nnnnn

In my case, I was targeting:

https://api.sumologic.com/api/v1/search/jobs

instead of

https://api.us2.sumologic.com/api/v1/search/jobs
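
As an aside, curl -L will follow the redirect automatically, but because the Location points at a different host, curl only forwards credentials supplied with -u to that host if you also pass --location-trusted.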

Sprint story work breakdown – how do you do it?

Every agile team appears to break down the tasks involved in completing a sprint story a little differently.  Also, every team appears to have a different perspective on the Scrum Definition of Done.  My perspective aligns closely with a previous manager’s definition:

A story is considered done if it is shelvable, packageable and shippable.  Basically, at the drop of a hat it can be deployed to production or made available to end customers with all associated artifacts, including supporting documentation.

This is the philosophy I use as a guideline when confronted with the task of breaking down sprint stories.  And why?

Let’s start by stating the assumption that we have a story with a well-written, understood and itemized set of user, technical or deployment requirements. These requirements should drive development, test and documentation.  Or should they?

When a story is considered done, it should be possible to validate that each requirement was met, including the existence of a component that is shippable to end customers, whoever they may be.  At the end of the sprint, the team as a collective whole, including the PM and additional stakeholders, should be able to validate that each requirement listed in the story is appropriately captured in the resulting artifacts.

There are various distinct development tasks involved in taking a marketable idea from concept to delivery.  These tasks include:

  • Design: captures high-level design activity, including UX work for UI-related stories.
  • Development: everyone knows what this is all about.  Yes, this is the task which captures all coding effort.
  • QA: everyone should also know what this is about.  Someone has to validate the output of the Development task to ensure it meets the user requirements specified in the story. An interesting point of contention I have run into involves the source document from which QA should author test cases.  One school of thought says test cases should be driven by the user requirements in the story; another says by the output of the Design task. This is an interesting topic on its own.
  • Deployment: this task captures work such as creating Chef scripts or other activities dedicated to ensuring code makes it from the build machine onto our production servers.  Some companies use DevOps engineers for this; others have the same developers do it all.  Again, another interesting topic on its own.
  • Documentation: a task which almost never gets the time of day, especially if the product is targeted at internal customers.

On one team, we had a hard and fast rule stating that EVERY story should be broken down into each of the aforementioned subtasks.  However, as with a lot of things in life, such hard and fast rules do not always apply and often break down completely, forcing the team into a “process-oriented as opposed to goal-oriented mindset,” as one of my team members succinctly put it.  I see these tasks as mere guidelines. Completing some stories will require all of these subtasks while others will not, and it is up to the Scrum Master, in consultation with the team, to use wise judgement in making this decision.

Should these distinct activities be captured by individual JIRA subtasks?  I personally think so.  Individual subtasks allow distinct teams to start work in parallel, enabling early engagement by all respective teams and possibly allowing for faster delivery of the feature.  Of course, the assumption here is that we have distinct teams responsible for each facet of the development workflow.  If a single developer is responsible for orchestrating each of the aforementioned phases, it is not necessary to explicitly break down the story into such tasks, even though it may still be worth capturing the effort required during each stage of the workflow.

What do you other Scrum Masters do?

Should models have simple methods?

I recently ran into an interesting conversation with some members of my team.

We have a class that defines the configuration of a communication infrastructure.  While utilizing an instance of this class within a factory, we decided to add several methods to the model to allow us to perform simple checks, such as IsChannelEnabled, as illustrated below:


public class ClientCommunicationConfiguration
{
    public string Version { get; set; }
    public string Description { get; set; }
    public bool IsEnabled { get; set; }
    public List<ChannelConfiguration> Channels { get; set; }

    public bool HasChannels()
    {
        return Channels != null && Channels.Any();
    }

    public bool IsChannelEnabled(string channelName)
    {
        if (!HasChannels())
            return false;

        return Channels.Any(c => string.Compare(c.Name, channelName, StringComparison.OrdinalIgnoreCase) == 0 && c.IsEnabled);
    }

    public bool IsMethodEnabled(string messageName)
    {
        // Guard against a null or empty channel list, as above.
        if (!HasChannels())
            return false;

        return Channels
            .Where(c => c.HasMessages() && c.IsEnabled)
            .SelectMany(c => c.Messages)
            .Any(m => string.Compare(m.Name, messageName, StringComparison.OrdinalIgnoreCase) == 0 && m.IsEnabled);
    }
}
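
For context, here is the rough shape of ChannelConfiguration implied by the calls above; this is a reconstruction based on the members used (Name, IsEnabled, HasMessages() and Messages), not the actual class from our code base:

public class ChannelConfiguration
{
    public string Name { get; set; }
    public bool IsEnabled { get; set; }
    public List<MessageConfiguration> Messages { get; set; }

    public bool HasMessages()
    {
        return Messages != null && Messages.Any();
    }
}

public class MessageConfiguration
{
    public string Name { get; set; }
    public bool IsEnabled { get; set; }
}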

A member of my team made a good argument that models should only expose properties; it is not the responsibility of the model to make these kinds of decisions.  In his view, these methods should reside in the factory class or in some other entity which contains the business logic to make these determinations.  It is definitely a fair point, although I made the following counterarguments:

  1. Every class in .NET inherits methods from Object, such as ToString(), GetHashCode() and Equals().  Models are therefore not pure property bags in that sense.
  2. We can encapsulate minimal logic in a model to allow one to make certain determinations that are inherent in the model’s definition.
  3. Encapsulating such logic in the model makes the model testable as well; otherwise we have to create entities just to wrap such logic and ensure testability.
  4. There is no such thing as a strict model without methods.  Models are simply serializable objects. They can still expose methods that perform simple checks, return data based on internal state, and do simple validation.
  5. This validation code can be reused.
  6. OO means bundling data (properties) with behavior (methods).

The one advantage I see in keeping these methods out of the model is purity. Otherwise, I do not see a realistic reason why models should not expose methods.

What are your thoughts?

Effect of Redis cluster master/slave Reconfiguration

Something, possibly a network or cluster failure, happened that required the Redis cluster to fail over its masters. The default port for Redis cluster masters is 6379. However, after the switch, the Redis masters were listening on port 6380.

All our connection strings pointing to the Redis cluster do not explicitly specify a port, which means our services were all trying to publish and subscribe to Redis masters on port 6379, which were no longer there after the switch.

This information was obtained by connecting to a Redis node and executing the info command.

C:\dev\tools\redis>redis-cli -h 1.1.2.3
1.1.2.3:6379> info

# Server
redis_version:3.0.7
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:46ce43dec62732a2
redis_mode:cluster
os:Linux 2.6.32-642.1.1.el6.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.7
process_id:30733
run_id:fa7f530a07abfeae43f45a45e6f5ec03fa738864
tcp_port:6379
uptime_in_seconds:5263766
uptime_in_days:60
hz:10
lru_clock:12363344
config_file:/etc/redis/6379/6379.conf

# Clients
connected_clients:52
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:2397912
used_memory_human:2.29M
used_memory_rss:9166848
used_memory_peak:4133760
used_memory_peak_human:3.94M
used_memory_lua:36864
mem_fragmentation_ratio:3.82
mem_allocator:jemalloc-3.6.0

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1471980906
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
aof_enabled:1
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:0
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_current_size:80890
aof_base_size:1568
aof_pending_rewrite:0
aof_buffer_length:0
aof_rewrite_buffer_length:0
aof_pending_bio_fsync:0
aof_delayed_fsync:0

# Stats
total_connections_received:712
total_commands_processed:6673948
instantaneous_ops_per_sec:1
total_net_input_bytes:265947707
total_net_output_bytes:327706638
instantaneous_input_kbps:0.05
instantaneous_output_kbps:0.31
rejected_connections:0
sync_full:1
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:27
keyspace_misses:0
pubsub_channels:7
pubsub_patterns:0
latest_fork_usec:53252
migrate_cached_sockets:0

# Replication
role:slave
master_host:1.1.2.4
master_port:6380
master_link_status:up
master_last_io_seconds_ago:4
master_sync_in_progress:0
slave_repl_offset:8171386
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:6439780
repl_backlog_histlen:1048576

# CPU
used_cpu_sys:3938.52
used_cpu_user:2750.70
used_cpu_sys_children:4.43
used_cpu_user_children:0.37

# Cluster
cluster_enabled:1

# Keyspace
db0:keys=12,expires=0,avg_ttl=0

This information indicates that this node, which was previously believed to be a master, has now been relegated to a slave. For pub/sub in Redis to work, the connection strings should specify either the exact IPs and ports of the Redis master nodes, or the IPs and ports of all the nodes in the Redis cluster.

The problem manifested as Redis failing to register a subscription to a channel when the appropriate client started.  This client subscribes to a Redis channel during startup.  However, while monitoring activity on all Redis nodes using the “monitor” command, we observed that when the client restarted, no subscription was registered with Redis for the channel. Likewise, when the internal RESTful services published a message to Redis, that activity was not recorded while monitoring the three “master” nodes in the cluster.
This was with the original connection strings specifying the IP addresses of the three Redis boxes without ports, as follows:

<add name="redis" connectionString="1.1.2.3,1.1.2.4,1.1.2.5" />

After running the Redis info command and determining that no masters were listening on the default port of 6379, we explicitly specified the ports the masters were listening on, and all services were able to establish a connection with Redis.

So, here’s an interim solution which works until we come up with a comprehensive strategy:

All Redis connection strings should include all the nodes (masters and slaves) with explicit IP addresses and ports. For example, these settings as configured in the RESTful and WebSocket services look like this:

<add name="redis" connectionString="1.1.2.3:6379,1.1.2.4:6379,1.1.2.5:6379,1.1.2.3:6380,1.1.2.4:6380,1.1.2.5:6380" />
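
Equivalently, if you construct the connection in code with StackExchange.Redis, the endpoints can be listed explicitly through ConfigurationOptions. A minimal sketch, mirroring the IPs and ports above:

// requires the StackExchange.Redis package
var options = new ConfigurationOptions();
options.EndPoints.Add("1.1.2.3", 6379);
options.EndPoints.Add("1.1.2.4", 6379);
options.EndPoints.Add("1.1.2.5", 6379);
options.EndPoints.Add("1.1.2.3", 6380);
options.EndPoints.Add("1.1.2.4", 6380);
options.EndPoints.Add("1.1.2.5", 6380);
var multiplexer = ConnectionMultiplexer.Connect(options);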

While researching this, it was also discovered that there is a channel called “__Booksleeve_MasterChanged” (used by StackExchange.Redis) which provides a change notification when the master configuration changes. Clients can subscribe to messages on this channel to detect a cluster topology change and act accordingly. The list of channels currently open on a Redis node can be retrieved with the command “pubsub channels”.
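
A minimal sketch of listening on that channel, using the multiplexer from the previous snippet (what you do in the handler, e.g. rebuilding the connection, is up to you):

var subscriber = multiplexer.GetSubscriber();
subscriber.Subscribe("__Booksleeve_MasterChanged", (channel, message) =>
{
    // The master configuration changed; re-resolve masters or rebuild the connection.
    Console.WriteLine("Redis master configuration changed: " + message);
});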