if !blogClogged

Software development and other stuff.

Must blog soon


Been far too long. Must blog soon.


Written by Michael Ruminer

September 23, 2016 at 2:55 pm

Posted in Uncategorized

Why so much game development is infested with bugs


Video Game Development

photo credit: brummiedave85 via photopin cc

Though I have worked with a large focus on SDLC and ALM practices as a consultant and as a practitioner, I have spent no time that I can recall formally working with the SDLC for game development. Despite being a computer geek, I was never into the gaming side of computers, as either a casual player or a real gamer. That changed suddenly over the past couple of years. At over 40 years old I began to play computer games, first on a console and then on the PC. It’s all because of my triplet children, who are about to turn 8. What I have found when I look at the gaming world while wearing my professional hat is alarming and distressing. But I also see opportunity.

First, the alarming and distressing part. In short, what I have seen appears to be a massive lack of proper quality controls. As I have begun to delve into game development, I have found that the obvious quality shortcomings are almost certainly coming, in part, from poor, limited, or non-existent testing tools, mechanisms, and processes. In the game development landscape, many of the testing tools seem to function and integrate at a level more consistent with the enterprise application development landscape of 1999 than of 2014. Yes, I said 1999 – the previous century. Compared to the testing tools used by enterprise developers, I have thus far found the gaming industry to be, in many places, what I gauge as 10–15 years behind. The caveat is that I am just starting to survey the wider landscape, and there are some third-party testing tools (not from the game engine developers) I have yet to crack. But take the Unity game engine as an example: you will now find unit test support using NUnit and support for creating some automated integration tests – a capability that only became available to the public in December of 2013. This means the penetration is zero for any game on the market, or about to hit the market, that used Unity – unless testing was being done entirely separately.

Without doubt, testing game code is very tough. A game is an intentionally rich environment with mind-boggling permutations. Testing of any sort in that environment is hard, and automated testing is much harder. I have great sympathy for those trying to apply to games testing tools similar to enterprise testing tools. The concepts of quality control don’t change between the two environments, but that doesn’t mean the same types of tools, working in the same ways, will translate well. The game engine creators have built marvels of technology. The development tools have become so powerful and so rich that developers can create massive, open worlds in ways that can be hard to comprehend. Because developers can create these worlds, and gamers and business interests demand these spectacular games, the developers build to the very edge. The problem…

The testing tools for game development have not remotely kept pace with the sophistication and complexity of the products they should be testing. The developers’ ability to create larger and more wondrous things also generates increasing numbers of bugs. With a cutting-edge bug engine – a.k.a. a game development engine – but testing tools and methodologies that may be a decade behind, the number of escaped bugs is only going to rise.

Here is another example from the gaming world where, as a player, I can guess at the likely reasons I see certain failing behaviors repeatedly. I’m not picking on the MMORPG Neverwinter, but I will give one of what could be numerous examples of a recurring bug. Like most MMORPGs, Neverwinter has an auction house for offering goods for sale between players. It has a defective search component – defective to this day. Search is a rather important part of finding an item when the volume of listings is very large, as would be expected in an MMORPG. Currently the search is still usable for most tasks. Add to this that the search results are supposed to be sortable by some of the columns in the results – the classic click-the-column-header-to-re-sort behavior (single column sort only). After one of the weekly updates, the column sort no longer sorted properly. That effectively makes finding the desired items impossible in many circumstances, since the result set will only return 400 records even if there are 5000. So you can’t find the next item up for bid, or the lowest buyout price, or the maximum asking price, and so on. Typically after the Thursday update there is a Friday emergency update to patch the things broken in the weekly Thursday patch. Those things happen; we all know how that works, especially in a large, complex game. I give them a pass on patches to patches – we all have to do it from time to time. The broken sort was fixed in the patch to the update. Until the next week’s update. Then it reverted back to the broken state of the week before. The broken-then-fixed pattern went on for a few weeks, as I recall, then paused for some weeks, and then the cycle started over for a few weeks again. I cannot say for certain, looking only at the resulting screens as a player, what was happening in their lifecycle, but I can make a few educated guesses.
First, there obviously is no regression test for that functionality, despite it breaking multiple times in updates. Or there is a regression test but traceability is so poor that no one recognizes that set of functions should be tested because it has been impacted. Or there is a regression test and it was considered too low priority to run – even after the functionality broke multiple times. I can also make an educated guess that there is some poor source code control going on: the defect keeps getting reintroduced and goes untested because of the poor traceability between the code churn and the regression test that may or may not exist. This is another area in which the connectivity between the game developer’s engine IDE workspace and version control often seems poorly integrated. Many times it doesn’t have to be well integrated, but in defense of game development, proper version control handling is often not as easily done as in other industries. That, though, is a problem and not a symptom. It’s a process and ALM issue.

I’m going to wrap this up and have more later. The point, as I hope you have gathered by now, is that there are multiple reasons you may have seen so many big games hit the market loaded with bugs, while fixes and updates with feature enhancements introduce even more. From what I am learning in my delve into the tooling for game development, one of the primary reasons is just what I placed in bold text above – the testing tools and testing process maturity for game development are horrendously behind the curve compared to the technology they need to test. The second reason I’d propose for the common scenario of released buggy games is not technical at all; it is process only in that it is a business decision on “ready for release”. It seems many organizations are not concerned with their release quality, because the monetary impact is still favorable for them when delivering “released” code of high complexity and huge potential but low quality in terms of reliability, bugs, and user experience. When they can release a higher quality product, at reduced overall cost, through better tooling and practices – and we as gamers refuse to buy the product and stick it out until Update 5, 10, or whatever – then I predict quality will rise. For the moment the development tools far outpace the testing tools, and thus the cost-effective quality controls achieved through proper testing will not happen. As I see it.

Written by Michael Ruminer

February 28, 2014 at 11:29 pm

Posted in Uncategorized

TFS Work Item Type Definition Field Name, RefName, Label Extract with Regex


I’m no whiz at regular expressions (regex), which you will see later in this entry where I post some regex commands. A number of times I have wanted to extract field information from the TFS Work Item Type Definition (WITD), which can be exported as XML. Usually it is with a client, and I want to find a list of custom or standard field names and perhaps also a list of the same or a subset of fields used in the form. I like to compare the latter with the former to see which fields may be defined but not exposed.

More times than not, my WITD file layout is nicely consistent with every other WITD I have exported. Naturally I’d like to use an XSLT or a regex to extract the data I want. Actually, I’d really like to use an XSLT so I could reformat the output into a tab-delimited file and easily load it into Excel or other table-like structures. But my XSLT is even worse than my regex. So each time I end up with regex commands, and the next time I can’t recall where I put the file I stored them in for future use. So I recreate them. I think I do get better at the regex every time I have to recreate them. When you look at the regex below you’ll get an idea of how horrible I was, if these are the better versions. Maximum performance is not a concern; I’m running each regex on demand against a handful of files.

With all that said, the caveat is that due to the use of the lookahead and lookbehind constructs (and likely some others), these regex will not work with every editor or regex engine, and probably not with C# regex. I have not tried them in Cygwin, but I believe I tried them in the Windows findstr command and they failed. They did work superbly well in EditPad Pro 7. This product comes from Just Great Software, which is owned by Jan Goyvaerts (blog). Just Great Software makes not only EditPad Pro but also RegexBuddy, PowerGREP, RegexMagic, and some other tools I am not as familiar with. ***This is just a personal comment on EditPad Pro – I get nothing for saying this.*** Since Notepad++ uses POSIX regex, these most certainly don’t work there. If you want to use an editor with regex, you should really check out EditPad Pro. Not only does it handle regex, it handles crazy regex like mine, which include lookbehinds containing variable-length character references. Most engines don’t like indeterminate regex, such as wildcard characters, as part of a lookbehind. Other engines usually don’t allow much more than the most rudimentary expression within a lookbehind. The engine inside EditPad Pro handled lookbehinds with a non-capturing group containing text values separated by a logical OR, e.g. (?:Microsoft|System), and multiple wildcard character references in the same lookbehind, and didn’t complain a bit. When EditPad Pro has highlighted the matching portions of the lines, you can cut the highlighted parts from the document or copy the results. When copying the results you get a nice list with a line of results for each line of the parsed document. The copy will paste perfectly into Excel.

Now for the regex. This is as much for me to be able to remember as it is for you to reference.

#extract custom field refnames
(?<=<Field.*?name=".*?refname=")(?!System|Microsoft)[^"]+

This regex gets the refname for each defined custom field in a work item definition. It looks for <Field (matched case-insensitively), some characters followed by name=", and some more characters followed by refname=". All of that must come before (via the lookbehind) the characters of interest for the line – the refname – which cannot begin with the word System or Microsoft and which ends with a ", though we don’t select the ending quotation mark. This description is not exactly how the regex engine finds the text; for a good understanding check out Regular-Expressions.info. I won’t explain it correctly if I try, and the explanations at Regular-Expressions.info are much better. That is also a website by Jan Goyvaerts.

The above regex can break down in any number of ways. The most notable is that I exclude matches on the tokens System and Microsoft, as those lead the refnames of out-of-the-box fields. If you happened to use System or Microsoft as the leading name for the refname of a custom field, then it will be excluded. The fix for that is to stop using one of those words at the start of your refname. Frankly, I can’t imagine anyone hitting that issue. Note that instead of a lookbehind of just (?<=<Field.*?refname="), I included a name=" as part of the lookbehind. The inclusion of the name token is to differentiate the field nodes that make up the basic field definition from field nodes that may appear under state nodes, transition nodes, and link column definitions. Without the name token added as a match criterion, duplicate field names will appear, picked up from those other nodes.
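As a cross-check, the same extraction can be approximated in stock Python. A caveat on this sketch: Python’s built-in re module rejects variable-length lookbehinds, so a capture group stands in for the lookbehind here, and the WITD fragment and field names are invented for illustration.

```python
import re

# Invented WITD-style field definitions for illustration only.
witd = '''
<FIELD name="Title" refname="System.Title" type="String" />
<FIELD name="Risk Level" refname="MyCompany.RiskLevel" type="String" />
<FIELD name="Priority" refname="Microsoft.VSTS.Common.Priority" type="Integer" />
<FIELD name="Review Notes" refname="MyCompany.ReviewNotes" type="PlainText" />
'''

# Same idea as the lookbehind regex: require name=" before refname=" so only
# the defining <Field> nodes match, and keep the negative lookahead that
# filters out the out-of-the-box System.* and Microsoft.* refnames.
pattern = re.compile(
    r'<Field.*?name=".*?refname="(?!System|Microsoft)([^"]+)',
    re.IGNORECASE)

print(pattern.findall(witd))  # ['MyCompany.RiskLevel', 'MyCompany.ReviewNotes']
```

The capture group plays the role that the lookbehind plays in EditPad Pro: the prefix is consumed rather than looked behind for, but the selected text is the same.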

#extract only custom field names

(?!.*refname="(?:System|Microsoft))(?<=<Field.*?\sname=")[^"]+

In the regex above I am extracting the field names for the custom field entries. This is the value of the name attribute in the defining Field nodes, and it should match, in order and in result count, the custom refname extract. This time, to determine whether a position is a potential match, the engine does a lookahead. It searches for 0–n characters followed by refname=" followed by either System or Microsoft. The lookahead begins from the main expression, not necessarily from the beginning of the line. The engine then tries the lookbehind criteria: it checks whether the current position is immediately preceded by some number of characters, immediately preceded by the name=" token, preceded by a space \s, preceded by 0–n characters, preceded by the text <Field. Once the lookbehind criteria are met, and the lookahead criteria have already been evaluated, the text between name=" and the following quotation mark is selected for the result set. (My standard caveats apply on my description of how it resolves the matches.) The big thing to notice here is that the lookahead expression is actually a negative criterion: (?= would indicate a positive lookahead, where matching text meets the search criteria, but (?! indicates a negative lookahead, where the overall match fails if the lookahead text matches. In this case that is how refnames starting with System or Microsoft are excluded from the results, and the lookahead lets us check values that come after the actual text we wish to match.

One thing to note on the regex to extract custom field names: notice in the lookbehind group (?<= that .*?\sname=" was used instead of .*?name=". The difference is that instead of letting the expected space just before the name attribute be matched by the wildcard token .*?, an intentional space \s was used. Without the intentional space, the name=" portion would also match the tail of the refname=" attribute. Due to the structure of the text, word boundaries are not used in the matching, which allows a match to succeed any time the engine finds the correct sequence of characters together – and refname=" satisfies .*?name=" just as well as [space]name=" does. For this reason the intentional space was put in, so the pattern can no longer match refname=".
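The false positive the space guards against is easy to reproduce. Here is a small sketch in Python (a different engine than EditPad Pro, and the sample line is invented) showing how name=" alone also bites on the tail of refname=":

```python
import re

line = '<FIELD name="Risk Level" refname="MyCompany.RiskLevel" type="String" />'

# Without the explicit space, name=" also matches the last characters of
# refname=", so the refname value comes back as a false positive.
loose = re.findall(r'name="([^"]+)"', line)
print(loose)   # ['Risk Level', 'MyCompany.RiskLevel']

# Requiring a whitespace character before name=" pins the match to the
# name attribute alone.
strict = re.findall(r'\sname="([^"]+)"', line)
print(strict)  # ['Risk Level']
```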

#extract control custom field names

(?<=<Control.*? FieldName=")(?!System|Microsoft)[^"]+

I often like to extract from the WITD the FieldName values for the form control entries. I do this to bump them up against the refnames of the fields extracted in the prior regex examples, so that I can see whether any custom fields are declared but not surfaced onto the form. If I find such fields, it most likely means either that the field is used for integration or some backend process, or that the field is not valid. In this case the regex is fairly straightforward. It has a lookbehind (?<= for <Control followed by some characters and then FieldName=", and a negative lookahead (?!System|Microsoft) so that the value after the quotation mark cannot begin with the word System or Microsoft (so Microsoft123 or System123 would be excluded). When those criteria are met, the FieldName value up to the closing quotation mark is selected.

#extract control custom field labels

(?<=<Control.*?FieldName.*?Label=")(?<!FieldName="(?:System|Microsoft).+?)[^"]+

I often like to pull the custom field label used on the control, as that label is often more informative about the meaning of the field than the field’s name or refname. The regex shown just prior to this paragraph extracts the label. The result order and result row count should match the output from the regex that extracts the control custom field names. In this instance the regex uses two lookbehind groups. One is the normal positive lookbehind, which succeeds on a match, while the other is a negative lookbehind, which succeeds when there is no match. Here the negative lookbehind (?<! tries to match FieldName="System or FieldName="Microsoft plus some additional characters; if it matches, the lookbehind fails, which is what allows the regex to extract labels only for custom fields. The positive lookbehind (?<= defines the expression that must be found just prior to the text of interest; when found, the regex yields the desired text up to, but not including, the closing quotation mark: [^"]+.
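Since most mainstream engines (Python’s re included) reject the variable-length lookbehinds above, here is a hedged equivalent sketch: capture the FieldName and Label together and do the System/Microsoft filtering in code instead of in the pattern. The control lines are invented for illustration.

```python
import re

controls = '''
<Control Type="FieldControl" FieldName="MyCompany.RiskLevel" Label="Risk Level" />
<Control Type="FieldControl" FieldName="System.Title" Label="Title" />
<Control Type="FieldControl" FieldName="MyCompany.ReviewNotes" Label="Review Notes" />
'''

# Capture each control's FieldName and Label as a pair, then drop the pairs
# whose FieldName starts with the out-of-the-box System/Microsoft prefixes --
# the same filtering the negative lookbehind does inside EditPad Pro.
pairs = re.findall(r'<Control[^>]*FieldName="([^"]+)"[^>]*Label="([^"]+)"', controls)
labels = [label for fieldname, label in pairs
          if not fieldname.startswith(('System', 'Microsoft'))]

print(labels)  # ['Risk Level', 'Review Notes']
```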

Without doubt these regular expressions can be improved upon; I’m lucky to get them built at all, it often seems. This may be of value to folks like me who want to extract specific information from the TFS work item type definition file without a lot of hassle. Oh, and yes, there are other ways to do these things without using regex, but none are likely as fun.
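To close the loop on the compare described earlier – declared custom fields versus fields surfaced on the form – here is one way the whole bump-up could be scripted. This is a sketch under the same assumptions as before: Python re, capture groups standing in for the lookbehinds, and an invented WITD fragment.

```python
import re

# Invented WITD fragment: two custom fields are defined but only one is
# surfaced on the form as a <Control>.
witd = '''
<FIELD name="Risk Level" refname="MyCompany.RiskLevel" type="String" />
<FIELD name="Review Notes" refname="MyCompany.ReviewNotes" type="PlainText" />
<FIELD name="Priority" refname="Microsoft.VSTS.Common.Priority" type="Integer" />
<Control Type="FieldControl" FieldName="MyCompany.RiskLevel" Label="Risk" />
<Control Type="FieldControl" FieldName="Microsoft.VSTS.Common.Priority" Label="Priority" />
'''

# Custom refnames declared in the field definitions.
defined = set(re.findall(
    r'<Field.*?name=".*?refname="(?!System|Microsoft)([^"]+)', witd, re.IGNORECASE))

# Custom fields actually bound to a form control.
on_form = set(re.findall(
    r'<Control.*?\sFieldName="(?!System|Microsoft)([^"]+)', witd, re.IGNORECASE))

# Fields declared in the definition but never placed on the form:
# candidates for integration-only fields, or simply dead weight.
print(sorted(defined - on_form))  # ['MyCompany.ReviewNotes']
```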

Written by Michael Ruminer

June 28, 2013 at 2:06 am

Beware the ReportingDataSourcePassword of TFSConfig



More later… but beware the /ReportingDataSourcePassword:PASSWORD flag of TFSConfig RebuildWarehouse /all.

Written by Michael Ruminer

June 22, 2012 at 3:59 pm

Posted in TFS

List of IE Settings for Better Resilience During Coded UI Playback


This is something I have just this week come across. Not sure how I lived without it but now that I have this information it’s gold to me.

Go to the Visual Studio Team Test blog at this blog entry: UITest Framework – IE Plugin – Part 2. I suggest you read the entire post, but the section I am referring to is the second one, titled similarly to this post’s title. What you will see is a list of IE settings, in the form of registry entries, that do wonderful things – some I had never thought of before. Below is just a snippet of what I am talking about.

Listing the commands to suppress unnecessary dialogs to make playback more resilient:

IE Setting: Turning off the prompt that asks whether to turn on AutoComplete
Batch Script Command: REG ADD "HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\IntelliForms" /v "AskUser" /t REG_DWORD /d 0 /f

IE Setting: Turning IE AutoComplete off
Batch Script Command: REG ADD "HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main" /v "Use FormSuggest" /t REG_SZ /d "no" /f

There are about 11 settings in all.

Written by Michael Ruminer

May 31, 2012 at 5:21 pm

Posted in Coded UI Testing

SharePoint 2010 Users Cannot Check Out Documents With IE


Users Cannot Checkout Documents

There have been some occurrences of some users not being able to check out documents in a document library while other users are able to do so, despite both having appropriate permissions and no other normal blocking situation existing. Not all of the mechanisms and workflows that allow some users to check out while blocking others are fully known, but the message that will be seen is shown in Image 1 below.

This document could not be checked out. You may not have permission to check out the document or it is already checked out or locked for editing by another user.

Image 1

Symptoms Experienced:

  • Using Internet Explorer the user cannot check out some or any files from a document library

  • Using Firefox or Chrome the user can check out the files from the document library

  • User receives the message shown in Image 1 when trying to check out

  • Other users may or may not be able to check out files using different mechanisms

  • The afflicted user(s) may or may not be able to check out files using a different mechanism

Discovered Resolutions:

  • Ensure the web application has a root site collection. See KB2625462

In the one experience so far, creating a root site collection for the web app resolved the issue without further action, but it is suspected that in some instances the root site collection may not be sufficient, or may already exist, and thus is not the sole cause of the denied check out.

If creating the root site collection does not resolve the issue (or in addition to it), it is recommended that the version of the SharePoint OpenDocuments Class add-on within Internet Explorer be investigated. To get to the Manage add-ons screen in IE, click the "cog" icon in the upper right corner to get the needed menu. See Image 2.

Manage Add-ons Menu Item IE9

Image 2

Once the Manage add-ons screen has appeared look for the SharePoint OpenDocuments Class entry and if present select it. In the lower part of the window the version will appear. See Image 3.

IE 9 Manage add-ons window

Image 3

If Office 2010 is installed it should show the version as 14.

If it shows Version 14 and Office 2010 is installed and the user cannot check out items after the root collection exists then perform a repair on Microsoft Office.

If it shows Version 14 and Office 2010 is installed and the user can check out items then all is good.

If Office 2010 is installed and it shows a version lower than 14 then update the add-on as indicated in the next section.

If Office 2007 is installed and it shows a version lower than 14 and the user cannot check out items after the root collection exists then perform a repair on Microsoft Office.

If Office 2007 is installed and it shows a version lower than 14 and the user can check out items then all is good.

 

Updating the add-on

Close the Manage Add-ons window in IE if open, and close all IE sessions.

Open up a console window with Administrator privileges. Change to the following directory:

C:\Program Files\Microsoft Office\Office14

run regsvr32.exe OWSSUPP.DLL

Restart IE and check the SharePoint OpenDocuments Class add-on to see if it is version 14.

Written by Michael Ruminer

April 24, 2012 at 1:12 pm

Posted in SharePoint 2010


Dev 11 TFS and Potential Limits on the Requirements Category


04/24/2012 Update: I noticed that in the Microsoft Scrum V2 template in Dev 11, the Product Backlog screen has two options for PBIs – Bug and Product Backlog Item. So multiple selections are an option. Now I must dig deeper to find out why it works in Scrum V2 but not in MSF Agile V6. See the image that follows:

Scrum V2 Choices


This post pertains to Dev 11 TFS and the use of work item categories. I’m not sure if this is a bug or by design. You can vote, comment, or otherwise weigh in on this reported issue on Microsoft Connect: ID 736750.

 

In the Agile V6 template, the web interface – specifically the portion that presents the burn down chart and that manages the backlog, including adding items into it – natively uses the User Story as the primary product backlog item. This use of User Story is to be expected, and it seems the system uses the User Story in these roles, in this portion of the web interface and in many other parts of the system, based on the assignment of User Story in the Requirements Category. Nothing really new there.

The issue is that if one adds an additional work item type to the team project – a Requirement work item type in this particular case – and then adds that work item type to the Requirements Category, the system will present exception text in place of numerous parts of the web interface, including on the backlog management page. This is despite the User Story type being left as the default for the Requirements Category.

It is understandable that the system might rely on the User Story as the ruling product backlog item and expect it to be the default work item type for the category, as it currently seems able to show only one work item type on the backlog screen for adding items on the fly. But failing to support additional work item types in the Requirements Category without breaking a chunk of the web interface is hopefully a bug rather than a choice. You can see the results of adding an additional work item type in Image 1; Image 2 shows how it looks normally.

 

Multiple Items in Requirement Category

Image 1 (Click image for larger view)

 

Proper Web Interface

Image 2 (Click image for a larger view)

 

The Categories.xml file with the additional work item type loads fine into the team project, as it conforms to the schema, but the results of doing so can be seen above.

I know many organizations that do not use the User Story as the definitive carrier of requirements but instead add back into their Agile work item set a Requirement work item that can augment a user story or, for non-functional requirements, work in place of the user story (though estimated, etc., in the same way).

I propose that, at the least, the parts of the Agile template that rely on the Requirements Category should use the DefaultWorkItemType, and ideally should be able to use all item types in the Requirements Category in many instances.

There are other reasons why this might be failing.

I intend to investigate this further to see if I can rule out some other potential causes and will report back. Perhaps it is not that two types are in the category, but that my Requirement work item type is missing some estimation field or a needed set of specific states. I will look into this further, but I suspect the mere existence of two work item types in the category causes the problem, based on the fact that the User Story creation section on the backlog page issues an error. It seems the page is designed to look at the Requirements Category but does not know what to do if more than one entry exists.

I’ll report back as I get the opportunity.

Written by Michael Ruminer

April 13, 2012 at 9:01 am

Posted in Dev 11 TFS

