The post Configuring your .Net Core applications appeared first on Config.
Previously, our .Net Framework applications used the Web.config and App.config files, which were XML files where we stored our application settings, and we accessed those settings through the ConfigurationManager class.
Often these files became real monsters, and maintaining them was traumatic for many developers.
With the launch of .Net Core and ASP.Net Core a few years ago, we gained a new, more powerful, flexible, and simple configuration engine.
In addition to JSON files like the famous appsettings.json, the new .Net Core configuration API gives us more flexibility when choosing the source of our settings. We can have several sources, such as JSON, XML, and INI files, environment variables, command-line arguments, and in-memory collections.
In addition to the sources mentioned above, you can create your own configuration provider to read other data sources by implementing the IConfigurationSource and IConfigurationProvider interfaces.
As in the old Web.config / App.config, our configuration consists of a set of key/value pairs, which can be distributed across hierarchically organized, environment-specific files.
For example, we can have one file for each specific environment in our development flow.
Which environment is used is defined by the ASPNETCORE_ENVIRONMENT environment variable: its value determines the environment the application runs in.
In the development environment, on the developer's machine, it can be set in the launchSettings.json file or in your project's properties. On a system running in production, this environment variable must be registered in the operating system.
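In production, for example, the variable can be set in the operating system before the application starts. A minimal sketch for Linux/macOS (on Windows you would set it through the system's environment variable settings instead):

```shell
# Define the environment for the current shell session (Linux/macOS).
export ASPNETCORE_ENVIRONMENT=Production

# The application (and you) can now read it back.
echo "$ASPNETCORE_ENVIRONMENT"
```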
Typically, values that do not change are stored in the appsettings.json file, while the other files should contain only the values that vary according to the environment, because those values are overridden in memory while your application runs.
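For example (the file names follow the ASP.Net Core convention, but the keys shown are purely illustrative), a base appsettings.json and a production override might look like:

```jsonc
// appsettings.json (base values, shared by all environments)
{
  "Logging": { "LogLevel": { "Default": "Information" } },
  "ConnectionStrings": {
    "Default": "Server=localhost;Database=shop_dev"
  }
}

// appsettings.Production.json (only the values that change in production)
{
  "ConnectionStrings": {
    "Default": "Server=prod-db;Database=shop"
  }
}
```

At runtime, the values from appsettings.Production.json overwrite the matching keys from appsettings.json when ASPNETCORE_ENVIRONMENT is Production.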
The configuration process starts in the ConfigurationBuilder class, available in the Microsoft.Extensions.Configuration NuGet package.
First of all, we must create a new instance of ConfigurationBuilder, telling it which providers we want to use as configuration sources.
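A sketch of that setup for a console application is shown below. It assumes the Microsoft.Extensions.Configuration.Json, FileExtensions, and EnvironmentVariables packages are installed; the configuration key read at the end is hypothetical.

```csharp
// Sketch: building configuration by hand in a .Net Core console app.
using System;
using System.IO;
using Microsoft.Extensions.Configuration;

class Program
{
    static void Main()
    {
        var environment = Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT");

        IConfiguration configuration = new ConfigurationBuilder()
            .SetBasePath(Directory.GetCurrentDirectory())
            // Base file, required; reloadOnChange refreshes values edited at runtime.
            .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
            // Environment-specific overrides, optional.
            .AddJsonFile($"appsettings.{environment}.json", optional: true)
            // OS environment variables take precedence over file values.
            .AddEnvironmentVariables()
            .Build();

        Console.WriteLine(configuration["ConnectionStrings:Default"]);
    }
}
```

The order of the Add* calls matters: later providers override keys supplied by earlier ones.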
Many developers are unable to access the appsettings.json configuration file in a console application in .Net Core; this is because this setup is not done automatically in that type of application, so you must add it manually.
In ASP.Net Core 2.x applications there is no need to add this code, because it is already done automatically by the WebHost.CreateDefaultBuilder method called in your application's Program class; an IConfiguration instance is then passed to the Startup class constructor.
In previous versions of ASP.Net Core, this configuration was done directly in the constructor of the Startup class.
As you might imagine, we should use the keys described in our configuration files to read their values. For this, we can use the IConfiguration interface mentioned earlier, which is available in the Nuget package Microsoft.Extensions.Configuration.Abstractions.
It can be injected into your class through the constructor, and then we can read values through its GetValue<T> method, passing the key whose value we want to read.
Values are always stored as strings by default; note, however, that you should specify the appropriate data type so the value is converted correctly when returned. For example, when reading a numeric value you can make the call as configuration.GetValue<int>("key").
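Putting it together, reading typed values through constructor injection might look like this sketch (the class, the keys, and the settings are hypothetical; GetValue<T> lives in the Microsoft.Extensions.Configuration.Binder package):

```csharp
using Microsoft.Extensions.Configuration;

public class OrderService
{
    private readonly int _retryCount;
    private readonly string _apiUrl;

    // IConfiguration is injected through the constructor by the DI container.
    public OrderService(IConfiguration configuration)
    {
        // GetValue<T> converts the stored string to the requested type.
        _retryCount = configuration.GetValue<int>("OrderApi:RetryCount");
        _apiUrl = configuration.GetValue<string>("OrderApi:Url");
    }
}
```

The colon in "OrderApi:RetryCount" navigates the JSON hierarchy, so it maps to the RetryCount property nested under an OrderApi section.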
In addition to the configuration values stored in files, you can read the environment variables of your operating system.
The post The importance of version control in Agile Development appeared first on Config.
Version control systems, commonly used in software development, are tools whose purpose is to manage the different versions of documents produced throughout the development process.
In Software Engineering, Ian Sommerville notes the importance of a good version control system for source code:
“To support version management, you should always use version management tools (sometimes called version control systems or source code control systems).
These tools identify, store, and control access to the different versions of components. There are many different version management systems available, including widely used open source systems such as CVS and Subversion (Pilato et al., 2004; Vesperman, 2003).”
Another definition of version control comes from Roger S. Pressman:
“Version control combines procedures and tools to manage different versions of configuration objects that are created during the software process.”
Among the features offered by version control tools, we will highlight here some that give an advantage to projects developed with agile methods.
Version control tools allow multiple developers to work in parallel on the same files without one overwriting another's code, making teamwork easier. The version control tool takes charge of managing what has been changed in each file by each user.
This makes the development process more agile, since several users can work on the same functionality at the same time; the responsibility for merging what each user did in a document lies with the version control tool.
When you create a new version of a document, version control tools identify it uniquely, so that the version can be retrieved at any time. In addition to assigning an identifier, version control software also stores who created the new version and the date it was created.
Optionally, when creating a new version of a document, the user can add a comment describing what has changed from the previous version of the file.
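The mechanics above can be sketched with git, one popular version control tool (this assumes git is installed, and runs in a throwaway temporary directory; the file and commit message are illustrative):

```shell
# Sketch: how a version control tool records an identifier, author,
# date, and optional comment for each new version of a document.
cd "$(mktemp -d)"
git init -q .
git config user.email "dev@example.com"
git config user.name "Dev"

echo "first draft" > notes.txt
git add notes.txt
# -m attaches the optional comment describing what changed in this version.
git commit -q -m "Add first draft of notes"

# Each version gets an identifier (hash), an author, and a date.
git log -1 --pretty=format:'%h | %an | %ad | %s'
```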
In agile projects it is common for several members of the development team to work in parallel on the same functionality. As a consequence, it is also common for them to make changes to the same documents, which can lead to conflicts between their versions.
So the ability to track what was changed in each version of the document is an advantage, since it lets you know what was changed, who changed it, and when.
This information helps to resolve conflicts and allows an analysis of the evolution of the document in question over time.
In the agile process of software development, one of the characteristic features is automated build and deployment of source code. This enables fast deliveries and customer satisfaction.
However, for this process to work in a fast and secure way, it is important to have good version control of the source code, storing what is ready to be published by the automated release process.
The post Using unit tests to improve the testing process in agile development appeared first on Config.
When we talk about automated testing, what first comes to mind? Tests that simulate a user operating an application through the UI (user interface)? Unit tests?
In this post, we’ll show you the levels of automated testing within Agile and how to improve your company’s test automation strategy with a focus on unit testing.
In companies that are starting to work with automated testing, it is common to see the use of record-playback tools such as Selenium IDE to create test scripts validating end-to-end scenarios in an application through the UI. These tests are also called Acceptance Tests.
Tools such as these have greatly facilitated the creation of these tests, allowing even a user without programming skills to record a test. Paid solutions from vendors such as IBM, HP, and Microsoft also offer this kind of tool.
Unfortunately, it is also common to record hundreds of tests that navigate the UI, only realizing that something is wrong when a new field is added to a screen or the identifier of an element is changed, for example. In the medium to long term the problems appear: maintenance costs become high, the tests take a long time to run (increasing build time) and to give feedback on the system, many tests fail due to false negatives, and so on. As a result, the team loses confidence in the tests and often stops running them.
In companies that have agile teams, unit tests (and TDD, Test-Driven Development) already help a lot in this situation. When we have a fair amount of unit tests for a system, it becomes less necessary to automate such exhaustive testing through the UI. Unit tests are easy to maintain, very effective for testing boundary values or the possible branches within the code, and run extremely fast, giving us good feedback on our system in a short time. However, by definition, they test isolated component behaviors, so at some point we must test the integration between those components, right? Should we then create a test that navigates through the UI to validate this? No!
In his book Succeeding with Agile, Mike Cohn describes the concept of the Test Automation Pyramid.
According to Cohn, an efficient automated testing strategy should include tests at three levels: unit, service (integration), and UI. At the base of the pyramid we have many unit tests, which should be the foundation of a good automated testing strategy. At the top, a small number of UI tests, precisely to avoid the problems we discussed earlier. In between, we have a fair amount of service tests, which can also be called integration tests, API tests, and so on. In the article The Forgotten Layer of the Test Automation Pyramid, Cohn comments on the importance of this level of testing and its role in filling the gap between unit and UI tests.
As we can see, the base of the test pyramid is unit tests! TDD is the practice used to develop these unit tests.
TDD is a "programming practice" that results in a suite of unit tests, where the tests are a side effect and not the goal itself.
The practice of TDD is guided by three basic steps, used in the development of the unit tests and, consequently, of the application's source code: write a failing test (red), write just enough code to make the test pass (green), and then refactor while keeping all tests passing.
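As a minimal sketch of the result of that cycle (Python is used here only for illustration; the function and its business rule are hypothetical), the test written first and the production code that makes it pass might look like:

```python
import unittest

# Production code, written after the test, just enough to make it pass (green).
def total_price(unit_price, quantity):
    """Return the total price; the behavior was pinned down by a test first."""
    if quantity < 0:
        raise ValueError("quantity must be non-negative")
    return unit_price * quantity

class TotalPriceTest(unittest.TestCase):
    # Written first (red): it fails until total_price exists and is correct.
    def test_multiplies_unit_price_by_quantity(self):
        self.assertEqual(total_price(10.0, 3), 30.0)

    # Unit tests are a cheap place to cover boundary values.
    def test_rejects_negative_quantity(self):
        with self.assertRaises(ValueError):
            total_price(10.0, -1)
```

Running `python -m unittest` executes the suite; the refactor step then improves the code while these tests stay green.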
Tests in the service layer basically test the application's services, "below" the UI. This approach avoids running every non-unit test through the user interface. So instead of running exhaustive tests validating all business rules through the interface, we can test below the UI. This type of test has become very important, since many applications nowadays have web and mobile interfaces (smartphone, tablet), and it is necessary to separate the interface from the application logic. Martin Fowler refers to these tests as subcutaneous tests.
Within Agile, we can do these tests to validate the criteria for accepting stories, for example. A good approach is the use of BDD (Behavior-Driven Development).
The post Simplifying cross-platform development with Docker appeared first on Config.
Docker is an open source project that has been helping organizations develop and deploy their applications in a centralized and scalable architecture. The key to making this possible is the concept of containerizing applications together with all their dependencies. As a result, developers can build specific environments that include exactly what they need to run their applications, even when the applications are cross-platform. In addition, this architecture is scalable: developers and IT operators can add new resources whenever they are needed, according to the requirements of the software being developed.
Cross-platform applications are software that must work on multiple operating systems from a single source code base. This architectural strategy avoids having to maintain many versions of the source code, each one targeting a specific platform.
For example, a cross-platform application may run on Microsoft Windows (on the x86 architecture), Linux (on the x86 architecture), and Mac OS (on either PowerPC or x86-based Apple Macintosh systems). With a traditional approach, a developer creating such an application would need at least three different environments.
It is in this complex scenario that Docker can help developers and IT operators, with a containerized development architecture that simplifies the process of developing, testing, building, deploying, and maintaining applications.
One of the greatest benefits of Docker is cross-platform portability. Developers who need to create applications that run on different operating systems benefit from Docker: productivity increases, and a complex architecture can become simple, since all the application's requirements and dependencies are available in the same container.
The main goal of Docker for developers is to allow multi-platform applications to be developed in a common, productive way, using a common syntax so that they run the same way on various operating systems, decreasing the inherent risks of a complex multi-platform architecture and its dependencies.
Developing with containers is a little different, but most of the time it is not so different from the traditional way we develop applications (write the code and the tests, then run them).
In the containerized development approach, you do not have a persistent, constantly evolving development environment; that is not a recommended strategy anyway. Instead of having a virtual machine configured for each device where you need to run the code, you create a lightweight virtualization layer called a "container".
Containers are created from images. The images, in turn, are built from templates that describe how to create the correct environment for your application, including all dependencies.
An image includes everything the application needs, along with its dependencies. In this way, when you run a container, everything the application requires to start and run correctly is available, avoiding time (and money) wasted setting up the environment. This saves time not only for the development team, but also for the operations team (a DevOps approach).
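As a sketch of such a template, a hypothetical Dockerfile for a small Python web application might look like this (the file names, base image, and start command are illustrative assumptions):

```dockerfile
# Hypothetical template describing the environment for a small web app.
FROM python:3.12-slim

WORKDIR /app

# Install the application's dependencies into the image,
# so every container starts with them already available.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application source last, so the dependency layers stay cached.
COPY . .

EXPOSE 8000
CMD ["python", "app.py"]
```

An image is then built once with `docker build -t myapp .`, and any number of identical containers can be started from it with `docker run -p 8000:8000 myapp`, on any operating system that runs Docker.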
For bigger dependencies that would usually be separate processes, you can split those out into additional containers to simplify the architecture.
DevOps is a concept that is rapidly spreading throughout the IT community. It is the practice of IT operations teams and development teams participating together in the entire application lifecycle, from design and development through production deployment and support.
From the description of Docker above, it is clear that Docker can aid the DevOps process. Nowadays, many organizations are pursuing DevOps initiatives, looking for agility in their IT processes, and the Docker architecture has helped bring development teams and IT operations teams closer together.
The post Continuous Delivery appeared first on Config.
In the day-to-day life of a team that develops software, new features are added to the current system or existing ones are improved. So delivering a new version is basically improving the system.
Usually (and ideally), it is the development team that delivers the system, and the delivery process is often dominated by them. Let's take an example:
"Deployment is manual and risky, so only a certain group of more experienced people is responsible for putting the new version into production."
In that case, even if the company's business team decides on a date to deliver the new version, the decision would not really be theirs but the development team's, since only they can carry out the deployment.
If deliveries are not very frequent, or have no frequency at all, it is even harder to decide when to deploy.
It's quite common to freeze the current version in times of risk, such as during a big sales promotion. There is no predictability or confidence about what will happen when a new version is delivered.
Making frequent deliveries is even more crucial for startups, as new ideas need to be tested and evaluated quickly.
Okay, so we need to improve the delivery process so that the business team makes the decisions without relying on the development team. How do we do it?
A few years ago we started to adopt automated tests, letting the machine do what it is good at (repetition), and letting humans do what they are good at (exploration).
Automating the manual testing process was such a good thing that it even evolved into TDD, a technique where automated tests are written before the code is implemented.
And we did not stop there. After automating the tests, we also decided to automate their execution. So every change to the system goes through the entire test suite, and the team receives feedback much faster.
Following this line of reasoning, automation emerges as the answer to this problem as well. Automating the way the software is delivered gives the process greater predictability and confidence, so the development team is no longer a dependency.
To develop a robust Continuous Delivery Process we must ensure that what is delivered will be of quality.
The Continuous Integration server runs only the local tests, but how do I know whether the external services (software/hardware) are working? How do I know whether this code will work as expected outside the development environment?
Extrapolating the idea of Continuous Integration, we can think of a deployment pipeline (build pipeline).
The idea is that each piece of code, in addition to being covered by automated tests, is also deployed to an approval (staging) server. This way, not only the code is tested, but also the infrastructure and the database.
Tools that control and version the configuration of the machines and the database are very important at this point, because they make it easier to go back and check where the problems are.
In short, each commit goes through the entire test suite, is then deployed to an approval environment, and integration tests are run there, ensuring that everything runs smoothly.
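A pipeline like the one described could be sketched in a generic CI server configuration such as the following (the stage names, job names, and scripts are hypothetical):

```yaml
# Hypothetical deployment pipeline definition.
stages:
  - test          # every commit runs the full automated test suite
  - staging       # deploy the commit to an approval environment
  - integration   # run integration tests against real infrastructure
  - production    # one-click deployment of any commit that passed

unit_tests:
  stage: test
  script: ./run-tests.sh

deploy_staging:
  stage: staging
  script: ./deploy.sh staging

integration_tests:
  stage: integration
  script: ./run-integration-tests.sh staging

deploy_production:
  stage: production
  script: ./deploy.sh production
  when: manual    # the business team decides when to press the button
```

The manual gate on the last stage is what lets the business team choose the delivery date without depending on the development team.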
Want to put the newest version of the system in the production today? Just use the last commit that went through Pipeline!
In the same way, it is easy to roll back the version in case of serious problems in production. Just take the last working commit and perform a new deployment.
The post Config recognized with the 2018 “Great User Experience” title for collaboration tools software from FinancesOnline Directory! appeared first on Config.
Config was conferred with the prestigious 2018 “Great User Experience” award and the 2018 “Rising Star” award from FinancesOnline, a popular B2B software review platform. This recognition is given out annually to outstanding solutions for B2B companies across several categories, including the leaders in the collaboration tools market.
After a thorough analysis, we received an overall review score of 8 out of 10 and a user satisfaction score of 80% at the time of this writing. Config was also honored in FinancesOnline’s new list of the Top 500 products.
In evaluating Config for the 2018 honors, FinancesOnline’s team of software experts examined and tested Config against various scenarios and parameters. Some of the specific criteria that their review team considered were:
According to a FinancesOnline statement released to Config, “Experts have seen Config perform impeccably for a range of configuration file management requirements. It is an absolutely necessary product that IT teams managing multiple configuration setups would require.”
We’re delighted to have a shiny new award to put on the Team’s mantle, but more importantly, the 2018 Great User Experience Award is another confirmation that our Configuration file Management system is providing the best value to IT teams.
The post Config file format – The LuaRocks Configuration files appeared first on Config.
The post How to apply the deployment configuration file by using Windows PowerShell appeared first on Config.
The post Config Release 0.9-0.11 Updates appeared first on Config.
Release Version 0.9.X
This release provides users an easier way to upgrade and manage their paid accounts.
New Features Launched
Enhancements & Bug Fixes
Release Version 0.10.X
We have released a major feature where users are able to review changes before allowing the deployment of application configuration files. This can be set per environment. This is useful, for example, on Production environment changes that would require review and approval before it can be deployed.
New Features Launched
Enhancements & Bug Fixes
Release Version 0.11.X
This release includes a new page that allows users to manage variables used within their application configuration files.
New Features Launched
The post DevOps – Difference between Continuous Integration, Delivery, and Deployment appeared first on Config.
In DevOps times, the terms we hear most today are:
Continuous Integration
Continuous Delivery
Continuous Deployment
Like every fashionable term, we often hear distorted explanations of what each practice is and what it is for.
Well, let's detail each of these processes. I will try to be objective and simple in the explanation.
A frequent confusion is equating continuous delivery with continuous integration. This is a big misconception. Continuous integration is the practice of integrating and testing new code with the existing code base, and it is a necessary condition for the continuous delivery process to happen properly.
It is the process of merging your new code into a shared branch. The idea here is to test your code as soon as possible to identify problems early.
Most of the work is done through automated testing, and this technique requires a unit testing framework. Usually there is a build server for these tests, so developers can continue the work while the tests are being run.
This is where the biggest confusion appears. People sometimes use the terms continuous delivery and continuous deployment interchangeably. The two practices are not the same thing. Continuous delivery is a set of practices that aims to ensure that new code can be deployed to the production environment at any time; continuous deployment takes this a step further.
In the practice of continuous delivery, we send the code to an environment, which can be DEV, STAGE, or PROD, once the developer feels the code is ready. At this point in the process, the idea is that the code is being delivered to a user base; here, "user base" means testers / QA. This is similar to continuous integration, but we can now scale up to behavior tests (BDD) for business logic, or even visual tests.
Does this delivery to the first users mean that the code should then go to production, or even to stage? No! It means that you are delivering to this first user base, period. It may only mean that the delivery is going to code review.
After all, what is the furthest point that continuous deployment addresses, and what makes it so different from the delivery practice? This part of the process automates publishing to the production environment as soon as you are sure the code has passed all the tests and is ready to be published. The idea is to deliver business value as quickly as possible and not let new code accumulate in stage. That is, once the code has gone through the integration process (responsible for unit and integration tests) and advanced through the delivery process (with manual, visual, and behavioral tests), it enters the deployment phase, which is responsible for publishing the code to production in an automated way.
At this stage of the process it is assumed that all tests have been done, and only automated publication to the production environment remains to be addressed.
It is assumed that the production environment is stable and that everything has been tested; the process should be as simple as pushing a button, and anyone should be able to do it.
That is, for Continuous Deployment, you necessarily need to have Continuous Integration and Continuous Delivery running as part of your routine.