Become a Developer Student Club Lead

Posted by Erica Hanson, Global Program Lead, Developer Student Clubs

Calling all student developers: If you're someone who wants to lead, is passionate about technology, loves problem-solving, and is driven to give back to your community, then consider applying to become a Developer Student Club Lead.

What are Developer Student Clubs?
Developer Student Clubs (DSC) are university-based community groups for students interested in Google developer technologies. With programs that meet in person and online, students from all undergraduate and graduate programs with an interest in growing as a developer are welcome. By joining a DSC, students grow their knowledge in a peer-to-peer learning environment and build solutions for local businesses and their community.

Why should I join?
- Grow your skills as a developer with training content from Google.
- Think of your own project, then lead a team of your peers to scale it.
- Build prototypes and solutions for local problems.
- Participate in a global developer competition.
- Receive access to select Google events and conferences.
- Gain valuable experience.

Is there a Developer Student Club near me?
Developer Student Clubs are now in 68+ countries with 860+ groups. Find a club near you or learn how to start your own, here.

When do I need to submit the interest form?
You may express interest through the form until May 15th, 11:59pm PST. Get started here. Make sure to learn more about our program criteria.

Our DSC Leads are working on meaningful projects around the world. Watch this video of how one lead worked to protect her community from dangerous floods in Indonesia. Similarly, read this story of how another lead helped modernize healthcare in Uganda.

We're looking forward to welcoming a new group of leads to Developer Student Clubs. Have a friend who you think is a good fit? Pass this article along. Wishing all student developers the best on the path toward building great products and community. Submit the interest form here.

*Developer Student Clubs are student-led independent organizations, and their presence does not indicate a relationship between Google and the students' universities.

Stay at home tips I compiled when I had to work from home and consolidated over the past two weeks

Hi everyone, I put together a blog post to share some tips based on my past experience working from home, which I consolidated over the past two weeks since the UK lockdown began. I believe it can be helpful, so I'm sharing it here: https://www.codingnagger.com/2020/04/02/stay-home-tips-and-stuff-for-we-the-people/

Here's the TLDR if you don't have time to read it:
- Take regular breaks while working from home.
- Exercise at least enough to compensate for the commute you can't do anymore.
- Try new things. I ordered a guitar and started taking online lessons. I link to an association whose goal is to help people with their creativity, and also to some free learning resources.
- Do your best to hear voices other than your own by turning up the radio or calling people.

These and a few more help me maintain my brain health and productivity through work. Hope you'll find something helpful in there.

submitted by /u/iamnguele

A guide to DevOps testing tools

The BlazeMeter Continuous Testing Platform is a complete solution for shift-left continuous testing. The platform includes UI functional testing, user experience testing, API testing and monitoring, performance testing, and virtual services. All capabilities are deeply integrated in an intuitive workflow designed for agile teams and provide robust support for popular open source tools. Delivered as SaaS with support for multiple clouds or as a private cloud deployment, it is a powerful tool for delivering innovation with quality and speed.
Mobile Labs: The company’s patented GigaFox is offered on-premises or hosted, and solves mobile device sharing and management challenges that arise during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server with custom tools provides “instant on” Appium test automation. GigaFox enables scheduling, collaboration, user management, security, mobile DevOps, and continuous automated testing for mobility teams spread across the globe and can connect cloud devices to an industry-leading number of third-party tools such as XCode, Android Studio, and many commercial test automation tools. 
Cantata from QA Systems is a certified standards compliant automated unit and integration testing tool for embedded C/C++ code. Highly automated test case generation, code coverage, static metrics and requirements tracing are supplemented by architectural analysis and test status management with Test Architect and Team Reporting add-ons. Cantata is integrated with an extensive set of development toolchains, from cross-compilers and debuggers to ALM and continuous integration tools.
Quali’s CloudShell Colony helps organizations streamline effective application testing by providing development and testing teams with self-service access to automated test environments while delivering security, governance, and cost control. By removing error-prone manual inefficiencies and conflict-ridden static test environments, it creates a solid foundation for Continuous Testing and DevOps. Founded in 2007, Quali helps businesses accelerate innovation, improve quality, and control costs with on-demand access to automated application and infrastructure environment provisioning across any cloud.
RELATED CONTENT:
Creating a clear testing path to DevOps takeoff
How do you help test in DevOps?
BMC AMI DevOps for Db2 accelerates the delivery of new and updated applications to the market. It comes with out-of-the-box integration with Jenkins, an application development orchestration tool. This provides the power to automatically research database schema change requirements, streamline the review and approval process, and safely implement the database schema changes, making development and operations teams more agile. 
Cobalt.io is modernizing penetration testing by building hacker-like testing into development cycles. Pentests are performed by a global team of vetted, highly skilled professionals with deep domain expertise. Cobalt.io offers a first-of-its-kind find-to-fix workflow that allows software companies to find and remediate vulnerabilities across an application portfolio, giving modern agile development teams the ability to do fast and frequent pentests and making development and security operations seamless with its integrations.
Eggplant enables companies to view their technology through the eyes of their users. The continuous, intelligent approach tests the end-to-end customer experience and investigates every possible user journey, providing unparalleled test coverage essential to DevOps success. Our technology taps AI and machine learning to test any technology on any device, operating system, or browser at any layer, from the UI to APIs to the database. 
GitLab helps delivery teams fully embrace continuous integration to automate building, packaging, and testing their code. GitLab’s industry-leading CI capabilities enable automated testing, Static Application Security Testing, Dynamic Application Security testing, and code quality analysis to provide fast feedback to developers and testers. With pipelines that support concurrent testing and parallel execution, teams get insight into every push, allowing them to deliver higher quality code faster.
HCL: AppScan is an automated application security testing and management tool. The company recently released version 10 of the solution, which focuses on securing DevOps. New features include interactive application security testing capabilities that go beyond SAST, DAST and SCA; out-of-the-box integrations with DevOps toolchains; and a new plugin to help developers identify vulnerabilities in their dev environments. 
HPE Software's automated testing solutions simplify software testing within fast-moving agile teams and for continuous integration scenarios. Integrated with DevOps tools and ALM solutions, HPE automated testing solutions keep quality at the center of today's modern applications and hybrid infrastructures. 
IBM: Continuous Testing provides an end-to-end picture of how products react to new code. It does this early in the development life cycle, which gives product teams confidence to push incremental code changes more frequently. IBM's cloud-native continuous testing platform, Rational Test Automation Server, along with the market-leading Rational Test Workbench and Virtualization Server, empowers teams to achieve this over a wide range of scenarios covering mobile, cloud, cognitive, mainframe and more, with minimal coding.
Micro Focus: Minimize risk and maximize user satisfaction by testing early, often, and at scale with Micro Focus’ industry-leading, integrated portfolio for continuous and comprehensive testing of web, mobile, and enterprise applications. With extensive technology support and AI-driven capabilities, you can test complex load, stress and performance scenarios, and implement resilient functional test automation throughout your entire DevOps pipeline. Our tools provide an end-to-end view of quality, with specific, actionable and timely feedback on your applications’ readiness status.
OverOps is a continuous reliability solution that helps companies prevent critical errors that are missed by testing and static analysis. Using OverOps, teams can quickly identify and resolve critical software issues. Unlike static code analyzers, log analyzers and APMs that rely on foresight, OverOps analyzes your code at runtime to produce specialized data that tells when, where and why code breaks. OverOps runs in the cloud or on-premises with robust CI/CD integrations to ensure software reliability from testing into production. To learn more about why global organizations trust OverOps, visit www.overops.com.
Perfecto: Perfecto offers a cloud-based continuous testing platform that takes mobile and web testing to the next level. It features a continuous quality lab with smart self-healing capabilities; test authoring, management, validation and debugging of even advanced and hard-to-test business scenarios; test execution simulations; and smart analysis. For mobile testing, users can test against more than 3,000 real devices, and web developers can boost their test portfolio with cross-browser testing in the cloud.
Progress: Telerik Test Studio enables QA and SDET professionals to create functional, performance and load tests that work immediately. Patent-pending multi-sense discovery eliminates the broken tests and technical debt that plague other testing solutions.
QASymphony’s qTest is a Test Case Management solution that integrates with popular development tools. QASymphony offers qTest eXplorer for teams doing exploratory testing. 
Sauce Labs: With more than 3 billion tests run and counting, the Sauce Labs Continuous Testing Cloud is the only continuous testing platform that delivers a 360-degree view of your customers’ application experience. It ensures web and mobile applications look, function, and perform exactly as they should on every browser, OS, and device, every single time.
ShiftLeft Inspect is a next-generation static code analysis solution, purpose-built to insert security into developer workflows without slowing them down. It accomplishes this by scanning code as fast as the pull request or build with the accuracy required to share directly with developers, without manual triage. Its coverage extends beyond technical vulnerabilities to business logic flaws, data leakage, hard-coded literals and insider threats.  
At SmartBear, we focus on your one priority that never changes: quality. We know delivering quality software over and over is complicated. So our tools are built to streamline your process while seamlessly working with all the tools you use – and will use. Whether it’s Swagger, Cucumber, ReadyAPI, Zephyr, TestComplete, or more, our tools are easy to try, easy to buy, and easy to integrate. 
Testlio is the leader in managed app testing. With robust client services, a global network of validated testers, and a comprehensive software platform, we provide a suite of flexible, scalable, and on-demand testing solutions. When apps must perform brilliantly, Testlio helps ensure world-class customer experiences. In any location. On any device. In any language.
The post A guide to DevOps testing tools appeared first on SD Times.

KubeMQ achieves Red Hat OpenShift Operator Certification

The Kubernetes message queue and message broker solution provider KubeMQ has announced its Kubernetes Operator is now Red Hat OpenShift Operator certified. 
The OpenShift Operator Certification is meant to give users confidence when building next-generation projects on Red Hat's Kubernetes and containers app platform OpenShift. With the certification, users will be able to deploy KubeMQ through the Red Hat OpenShift Operator catalog.
"KubeMQ is a Kubernetes message queue broker: enterprise-grade, scalable, highly available and more secure. It helps enterprises build stable microservices solutions that can be easily scaled, as well as enabling additional microservices to be quickly developed and added to the solution," the company wrote in a post. 
KubeMQ is Kubernetes native, easy to deploy, provides enterprise-grade assurance and is available on all messaging patterns. 
“We are proud to deliver a Red Hat OpenShift Certified Operator. It is an important milestone for KubeMQ as it contributes to earning industry recognition as a qualified enterprise solution. The KubeMQ Operator will provide enterprises with simple and robust access to our Kubernetes native message queue,” said Gil Eyal, KubeMQ’s CEO.
The post KubeMQ achieves Red Hat OpenShift Operator Certification appeared first on SD Times.

SD Times news digest: Quick Base Sandbox, Google Pay’s Business Console, and XML Spy

Quick Base has announced a new way for business professionals to work with IT and test low-code applications. The new Sandbox capability enables cross-functional teams to quickly create and optimize business-critical applications without risking disruption.
Sandbox provides a place to easily collaborate with IT when making changes to new and existing workflows, while giving IT departments stronger governance through tighter control over the development process, according to the company. 
Additional details are available here.
Google Pay introduces Business Console
Google Pay's Business Console is a new tool that streamlines the process of integrating Google Pay into apps and websites.
Users will also be able to discover resources, get support at different stages throughout their integration, and keep track of progress along the way, according to the company.
“And this is only the beginning. As we add new features, the Business Console will be your go-to place to manage all your new and existing integrations with Google Pay, see how your integrations perform over time, and add support for other business- and developer-focused products,” Google wrote in a blog post.
Collibra raises $112.5 million for data-driven decisions
Collibra raised $112.5 million to further its Data Intelligence solution aimed at improving the quality of business decisions driven by data. The company's total venture funding is now $345.5 million. 
Collibra’s suite of products helps organizations address a breadth of business challenges, including data privacy and protection, compliance and risk mitigation, operational efficiency and cost reduction, according to the company. 
Additional details are available here.
UiPath expands RPA certification program
UiPath extended its training platform and certification program to accelerate workforce readiness. 
The new courses include RPA Associate, which is the foundational certification level for all RPA job roles, and RPA Advanced Developer, which is a second certification level. 
“UiPath is committed to investing in the workforce of the future through its UiPath Academy, which is focused on training and reskilling for the jobs of today and the jobs of tomorrow,” UiPath wrote in a post.
New tools for XML and JSON editing
The latest release of XMLSpy adds several user-requested features for working with JSON and XML, as well as new functionality for debugging XPath, updated standards and database support, and more.
The new auto-backup feature in XMLSpy is great for recovering files in the event of a software or hardware crash.
Additional details are available here.
The post SD Times news digest: Quick Base Sandbox, Google Pay’s Business Console, and XML Spy appeared first on SD Times.

How do you help test in DevOps?

Shamim Ahmed, CTO for DevOps Solutions at Broadcom, a global technology company:
The promise of DevOps is that we could deliver more, faster, with no sacrifice in quality. In reality, we see some common blocks to DevOps success. At Broadcom, we address those challenges: we help eliminate the testing bottleneck and bring teams together in a single platform that lets everyone work the way they want to work. Agile teams want to work in their IDEs and command lines. They want to use open source, and they want tools that are seamlessly embedded into the CI/CD pipeline. Traditional testers want to use a UI, and features like scriptless testing. 
Broadcom makes this simple with BlazeMeter Continuous Testing Platform, a single application that delivers all the functionality you need to make continuous testing a reality. BlazeMeter Continuous Testing Platform is designed for every team across the SDLC. It can be used “as code” in the IDE or with the easy UI. All teams can share assets and align around common metrics and AI-driven insights. AI is also used to optimize test cycles, predict defects and highlight areas for continuous improvement. 
RELATED CONTENT: Creating a clear testing path to DevOps takeoff
Most organizations know that DevOps success depends on the ability to shift left and right, and deliver new capabilities with volume and velocity. BlazeMeter really helps them do that, all the way from aligning the business and dev around model-based requirements to using data from production to drive continuous improvement. And best of all, we make it easy. It's literally click to start, and there's a free version so you can get started today. 
Dan McFall, CEO of Mobile Labs, an enterprise mobile app testing company:
For Mobile Labs, we really tackle the problem of mobile devices as enterprise infrastructure. What that means is answering the questions of: Where are my devices? Who has them? What state are they in? What is on them? What application versions are loaded? What can they see? All of the things you need to basically have mobile devices be available at the development and test environment. We solve that problem, and then make them essentially act just like virtual machines. You can call them via API layers. You can build a seamless, headless process around our infrastructure component into your DevOps process. You can have a broad and deep testing space that gives you the confidence that you have covered your bases. 
We are also looking into more scripting, such as low-code or no-code scripting environments and more behavior-driven environments. We are seeing that a lot of people are resource-challenged and don't have folks who can write mobile automation. We are going to make it easier for people to do mobile automation from a scripting perspective this year. 
Those are the areas where we are continuing to help, which is just the right people with the right skills with the access to the right environments at the right time. That is going to be a really key aspect to having a successful DevOps strategy.  
Matt Davis, managing director for QA Systems, a software quality company:
QA Systems helps DevOps engineers overcome the challenges of test automation and tool integration by focusing on repeatable steps and command line interfaces. Not everything in testing can be automated. However, by removing tedious manual steps from the process, we help engineers focus on building the right tests and solving problems.
Automating checks on software quality metrics, architectural relationships, hierarchy and dependencies in your code ensures that you don't deviate from your intended design and that your code doesn't become less maintainable as it evolves. By combining automatic test case generation, integrated code coverage, a change-based test build system, automatic plugging of testing gaps and direct linking of tests to requirements, engineers can now access unprecedented test capabilities. Code-level analysis and testing should be at the heart of DevOps, where developers can use them efficiently every time code is checked in. QA Systems has found that fully automating these capabilities on the basis of open standards and integrated solutions significantly enhances the functionality of the verification CI/CD pipeline.
Maya Ber Lerner, CTO of Quali, a cloud automation and digital transformation company:
Test automation is great, but it only solves one part of the DevOps testing problem. To ensure the quality of your application, your developers and testers need instant access to dynamic, production-like environments throughout the value stream to develop applications and run automated tests effectively. However, time-consuming, error-prone manual processes for setting up and tearing down these environments create a huge bottleneck, leading to multiple teams struggling to share static environments, or skirting around ITOps and implementing shadow-IT practices, which can greatly drive up costs and bypass security best practices.
Environment as a Service solutions, like Quali’s CloudShell Colony, make it possible for developers and testers to gain immediate access to dynamic, production-like environments on-demand with one click, or automatically by connecting your CI/CD tools to accelerate the value stream. We even have a customer that set up a Slack-bot to provision environment requests.
With CloudShell Colony, you can bridge the gap between Dev, Sec and ITOps by leveraging the speed of self-service, automated setup and teardown of dynamic environments across the value stream, coupled with policy-based configurations that ensure security, compliance, infrastructure utilization and cost control, all from one tool.
The post How do you help test in DevOps? appeared first on SD Times.

Detect git Directory with Bash

One interesting aspect of working at Mozilla is that Firefox lives in a mercurial repository while several other projects live on GitHub in a git repository. While most focus on either Firefox or another project, I switch between both, leaving me running git commands inside the mercurial repository and hg commands inside git repos. It’s a frustration that I’ve lived with for a while so I sought to find a unified way of completing common tasks.

The first step was learning to detect git from command line:

if git rev-parse --git-dir > /dev/null 2>&1; then
  echo "git repo!"
else
  echo "NOT a git repo!"
fi

The if branch above runs when we're inside a git repository; the else branch means the current directory is not inside a git repo.

One frequent task is checking out master and pulling the latest code from upstream, so I create an alias to do just that:

master() {
  if git rev-parse --git-dir > /dev/null 2>&1; then
    git checkout master && git pull upstream master
  else
    hg pull && hg checkout "last(public())"
  fi
}

This alias will save me time and frustration moving forward, and I’m sure I’ll find other aliases to create based on git detection!
The post Detect git Directory with Bash appeared first on David Walsh Blog.

Why Do Some HTML Elements Become Deprecated?

The internet has been around for a long while, and over time we’ve changed the way we think about web design. Many old techniques and ways of doing things have gotten phased out as newer and better alternatives have been created, and we say that they have been deprecated.

Deprecated. It’s a word we use and see often. But have you stopped to think about what it means in practice? What are some examples of deprecated web elements, and why don’t we use them any more?

What is deprecation?

In everyday English, to “deprecate” something is to express disapproval of it. For example, you might be inclined to deprecate a news story you don’t like.

When we’re speaking in a technical sense, however, deprecation is the discouragement of use for an old feature. Often, the old feature remains functional in the interests of backward compatibility (so legacy projects don’t break). In essence, this means that you can technically still do things the legacy way. It’ll probably still work, but maybe it’s better to use the new way. 

Another common scenario is when technical elements get deprecated as a prelude to their future removal (which we sometimes call “sunsetting” a feature). This provides everybody time to transition from the old way of working to the new system before the transition happens. If you follow WordPress at all, they recently did this with their radically new Gutenberg editor. They shipped it, but kept an option available to revert to the “classic” editor so users could take time to transition. Someday, the “classic” editor will likely be removed, leaving Gutenberg as the only option for editing posts. In other words, WordPress is sunsetting the “classic” editor.

That’s merely one example. We can also look at HTML features that were once essential staples but became deprecated at some point in time.

Why do HTML elements get deprecated?

Over the years, our way of thinking about HTML has evolved. Originally, it was an all-purpose markup language for displaying and styling content online.

Over time, as external stylesheets became more of a thing, it began to make more sense to think about web development differently — as a separation of concerns where HTML defines the content of a page, and CSS handles the presentation of it.

This separation of style and content brings numerous benefits:

- Avoiding duplication: Repeating code for every instance of red-colored text on a page is unwieldy and inefficient when you can have a single CSS class to handle all of it at once.
- Ease of management: With all of the presentation controlled from a central stylesheet, you can make site-wide changes with little effort.
- Readability: When viewing a website's source, it's a lot easier to understand the code that has been neatly abstracted into separate files for content and style.
- Caching: The vast majority of websites have consistent styling across all pages, so why make the browser download those style definitions again and again? Putting the presentation code in a dedicated stylesheet allows for caching and reuse to save bandwidth.
- Developer specialization: Big website projects may have multiple designers and developers working on them, each with their individual areas of expertise. Allowing a CSS specialist to work on their part of the project in their own separate files can be a lot easier for everybody involved.
- User options: Separating styling from content can allow the developer to easily offer display options to the end user (the increasingly popular "night mode" is a good example of this) or different display modes for accessibility.
- Responsiveness and device independence: Separating the code for content and visual presentation makes it much easier to build websites that display in very different ways on different screen resolutions.

However, in the early days of HTML there was a fair amount of markup designed to control the look of the page right alongside the content. You might see code like this: 

<center><font face="verdana" color="#2400D3">Hello world!</font></center>

…all of which is now deprecated due to the aforementioned separation of concerns. 
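
By way of comparison, here is a minimal sketch of how that same centered, colored greeting is typically written today, with the presentation moved out of the markup and into CSS (the .greeting class name is just an illustration):

<p class="greeting">Hello world!</p>

<style>
  /* The presentation now lives in CSS rather than in the markup */
  .greeting {
    text-align: center;               /* replaces <center> */
    font-family: Verdana, sans-serif; /* replaces <font face> */
    color: #2400D3;                   /* replaces <font color> */
  }
</style>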

Which HTML elements are now deprecated?

As of the release of HTML5, use of the following elements is discouraged:

- <acronym> (use <abbr> instead)
- <applet> (use <object>)
- <basefont> (use CSS font properties, like font-size, font-family, etc.)
- <big> (use CSS font-size)
- <center> (use CSS text-align)
- <dir> (use <ul>)
- <font> (use CSS font properties)
- <frame> (use <iframe>)
- <frameset> (not needed any more)
- <isindex> (not needed any more)
- <noframes> (not needed any more)
- <s> (use text-decoration: line-through in CSS)
- <strike> (use text-decoration: line-through in CSS)
- <tt> (use <code>)

There is also a long list of deprecated attributes, including attributes on many elements that continue to be otherwise valid (such as the align attribute used by many elements). The W3C has the full list of deprecated attributes.
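
To make one before-and-after from that list concrete, here is a small sketch of replacing <strike> with CSS (the .expired class name is only illustrative):

<!-- Deprecated presentational markup -->
<strike>This offer has ended.</strike>

<!-- Modern equivalent: the strikethrough is applied with CSS -->
<span class="expired">This offer has ended.</span>

<style>
  .expired { text-decoration: line-through; }
</style>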

Why don’t we use table for layouts any more?

Before CSS became widespread, it was common to see website layouts constructed with the <table> element. While the <table> element is not deprecated, using it for layout is strongly discouraged. In fact, pretty much all of the HTML table attributes that were used for layouts have been deprecated, such as cellpadding, bgcolor and width. 

At one time, tables seemed to be a pretty good way to lay out a web page. We could make rows and columns any size we wanted, meaning we could put everything inside. Headers, navigation, footers… you name it!

That would create a lot of website code that looked like this:

<table border="0" cellpadding="0" cellspacing="0" width="720">
  <tr>
    <td colspan="10"><img name="logobar" src="logobar.jpg" width="720" height="69" border="0" alt="Logo"></td>
  </tr>
  <tr>
    <td rowspan="2" colspan="5"><img name="something" src="something.jpg" width="495" height="19" border="0" alt="A picture of something"></td>
    <td>Blah blah blah!</td>
    <td colspan="3">
  <tr>
    <!-- and so on -->
</table>

There are numerous problems with this approach:

- Complicated layouts often end up with tables nested inside other tables, which creates a headache-inducing mess of code. Just look at the source of any email newsletter.
- Accessibility is problematic, as screen readers tend to get befuddled by the overuse of tables.
- Tables are slow to render, as the browser waits for the entire table to download before showing it on the screen.
- Responsive and mobile-friendly layouts are very difficult to create with a table-based layout. We still have not found a silver bullet for responsive tables (though many clever ideas exist).

Continuing the theme of separating content and presentation, CSS is a much more efficient way to create the visual layout without cluttering the code of the main HTML document. 
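
To give a rough sense of the modern alternative, here is a minimal sketch of a similar logo-plus-content arrangement built with CSS grid instead of nested table cells (the class names and file names are purely illustrative):

<div class="page">
  <header class="logobar"><img src="logobar.jpg" alt="Logo"></header>
  <main class="content">Blah blah blah!</main>
  <aside class="sidebar"><!-- and so on --></aside>
</div>

<style>
  /* A two-column layout with a full-width header row */
  .page {
    display: grid;
    grid-template-columns: 2fr 1fr;
    max-width: 720px;
  }
  .logobar {
    grid-column: 1 / -1; /* the header spans both columns */
  }
</style>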

So, when should we use <table>? Actual tabular data, of course! If you need to display a list of baseball scores, statistics or anything else in that vein, <table> is your friend. 
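
For instance, a small scoreboard (with purely illustrative team names and scores) marked up as an actual data table might look like this:

<table>
  <caption>Yesterday's scores</caption>
  <thead>
    <tr>
      <th scope="col">Home</th>
      <th scope="col">Away</th>
      <th scope="col">Score</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>Team A</td><td>Team B</td><td>4-2</td></tr>
    <tr><td>Team C</td><td>Team D</td><td>1-0</td></tr>
  </tbody>
</table>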

Why do we still use <b> and <i> tags?

“Hang on just a moment,” you might say. “How come bold and italic HTML tags are still considered OK? Aren’t those forms of visual styling that ought to be handled with CSS?”

It’s a good question, and one that seems difficult to answer when we consider that other tags like <center> and <s> are deprecated. What’s going on here?

The short and simple answer is that <b> and <i> would probably have been deprecated if they weren’t so widespread and useful. CSS alternatives seem somewhat unwieldy by comparison:

<style>
.emphasis { font-weight:bold }
</style>

This is a <span class="emphasis">bold</span> word!

This is a <span style="font-weight:bold">bold</span> word!

This is a <b>bold</b> word!

The long answer is that these tags have now been assigned some semantic meaning, giving them value beyond pure visual presentation and allowing designers to use them to confer additional information about the text they contain.

This is important because it helps screen readers and search crawlers better understand the purpose of the content wrapped in these tags. We might italicize a word for several reasons, like adding emphasis, invoking the title of a creative work, referring to a scientific name, and so on. How does a screen reader know whether to place spoken emphasis on the word or not?

<b> and <i> have companions, including <strong>, <em> and <cite>. Together, these tags make the meaning and context of text clearer; a combined example follows the list below:

- <b> is for drawing attention to text without giving it any additional importance. It's used when we want to draw attention to something without changing the inflection of the text when it is read by a screen reader or without adding any additional weight or meaning to the content for search engines.
- <strong> is a lot like <b> but signals the importance of something. It's the same as changing the inflection of your voice when adding emphasis on a certain word.
- <i> italicizes text without giving it any additional meaning or emphasis. It's perfect for writing out something that is normally italicized, like the scientific name of an animal.
- <em> is like <i> in that it italicizes text, but it adds additional emphasis (hence the tag name) without adding more importance in context. ("I'm sure I didn't forget to feed the cat.")
- <cite> is what we use to refer to the title of a creative work, say a movie like The Silence of the Lambs. This way, text is styled but doesn't affect the way the sentence would be read aloud.

In general, the rule is that <b> and <i> are to be used only as a last resort if you can't find anything more appropriate for your needs. This semantic meaning allows <b> and <i> to continue to have a place in our modern array of HTML elements and survive the deprecation that has befallen other, similar style tags.
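
Putting a few of these together in one (purely illustrative) snippet shows how the tags carry meaning rather than just styling:

<p>
  <b>Note:</b> feeding time is at noon.
  I'm sure I <em>didn't</em> forget to feed the cat,
  whose scientific name is <i>Felis catus</i>,
  while we were watching <cite>The Silence of the Lambs</cite>.
  <strong>Never</strong> skip a meal.
</p>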

On a related note, <u> — the underline tag — was at one time deprecated, but has since been restored in HTML5 because it has some semantic uses (such as annotating spelling errors).

There are many other HTML elements that might lend styling to content, but primarily serve to provide semantic meaning to content. Mandy Michael has an excellent write-up that covers those and how they can be used (and even combined!) to make the most semantic markup possible.

Undead HTML attributes

Some deprecated elements are still in widespread use around the web today. After all, they still work — they’re just discouraged.

This is sometimes because word hasn't gotten around that the thing you've been using for ages isn't actually the way it's done any more. Other times, it's due to folks who don't see a compelling reason to change from doing something that works perfectly well. Hey, CSS-Tricks still uses the teletype element for certain reasons.

One such undead HTML relic is the align attribute in otherwise valid tags, especially images. You may also see <img> tags with a border attribute, although that attribute has long been deprecated. CSS, of course, is the preferred and modern method for that kind of presentation.
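
As a minimal sketch of that swap (the file name and class name here are hypothetical):

<!-- Legacy markup using the deprecated border attribute -->
<img src="photo.jpg" alt="A photo" border="1">

<!-- Modern equivalent: the border is described in CSS -->
<img class="framed" src="photo.jpg" alt="A photo">

<style>
  .framed { border: 1px solid #000; }
</style>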

Staying up to date with deprecation is key for any web developer. Making sure your code follows the current recommendations while avoiding legacy elements is an essential best practice. It not only ensures that your site will continue to work in the long run, but that it will play nicely with the web of the future.

Questions? Post a comment! You can also find me over at Angle Studios where I work.
The post Why Do Some HTML Elements Become Deprecated? appeared first on CSS-Tricks.

Rethinking Code Comments

Justin Duke asks if treating code comments like footnotes could help us understand the code in a file better. In his mockup, all the comments are hidden by default and require a click to reveal:

What a neat idea! Justin’s design reminds me of the way that Instapaper treated inline footnotes.

Instapaper (circa 2012)

I guess the reason I like this idea so much is that a lot of comments don't need to be read constantly; they're sort of a reminder that, "Hey, this needs work in the future" or "Yikes, this is weird and I'm sorry." Keeping these comments out of the code makes it much easier to scan the whole file, too.

I do wonder if there could be a toggle that shows every comment, just in case you need to read all the comments in sequence rather than clicking to toggle each one.

Anyway, all this talk about comments reminds me of an absolutely fantastic talk by Sarah Drasner at JSConf this year where she discussed why comments are so dang hard to get right:

The post Rethinking Code Comments appeared first on CSS-Tricks.

Arduino To Host Virtual Conference to Battle Covid-19 Today (April 2, 2020)

Arduino, maker of credit-card sized computers that are easily adapted in Internet-of-Things (IoT) fashion to standalone analog or digital devices, has put out a call to all developers, researchers, doctors, academics, IT pros and other innovators to join a virtual conference today (April 2, 2020) on battling Covid-19 with Arduinos. For example, building Arduino-based ventilators. 

Creating a clear testing path to DevOps takeoff

DevOps has transformed the way businesses think and software development teams work, but the power of DevOps is still limited.
According to Shamim Ahmed, CTO for DevOps solutions at the global technology company Broadcom, testing still stands in the way of achieving true DevOps and continuous delivery. Testing is a time-consuming process that requires many moving parts to happen in the right way, he explained.
Testing in general is just more complex, according to Maya Ber Lerner, CTO of Quali, a cloud automation and digital transformation company. "Testing is not as lightweight as development. You need to have your test automation in place, you need to have your applications in place, third-party components in place, and you need to have the right infrastructure and data set in place. Each one of those things can easily fail a test," she said. 
In addition, there are a number of different questions and scenarios you have to ask yourself when it comes to testing, said Matt Davis, managing director for QA Systems, a software quality company. "Testing, of course, is at the heart of something that you develop. You test it to determine whether it's going to be released, but that testing can be on the basis of what should I test? Should I test everything? Should I be looking at impact analysis and change-based testing? Should I be looking at auto-test case generation?" he said. And these are just a small sample of the questions and thinking required for testing.
Luckily, there are some ways teams can start to break down the barriers of testing in DevOps:
Don’t treat testing like a phase in the life cycle.
Despite efforts to test early and to test often, Broadcom's Ahmed said testing is still looked at as a particular phase in the life cycle, "when in fact it should be continuous and embedded throughout the entire life cycle." 
To do this, testing needs to shift left and shift right. "The more testing, especially around automation, that you can build into the application from day one is key," said Dan McFall, CEO of Mobile Labs, an enterprise mobile app testing company. McFall explained that techniques like test-driven development or behavior-driven development help developers become more involved in testing and really take the time to look at the features and solutions, how they are going to be tested, and how those tests are going to validate business requirements. 
Shifting right enables developers to work better with operations to understand what is going on in production and take advantage of that to improve tests and test conditions, according to Ahmed. 
Acquire the right skills.
"As we shift left and shift right with continuous testing, we need to start to bring in additional skills to the traditional QA testing mindset," said Ahmed. "For example, with shift left you need to bring in more developer skills, and developers need to be more adept at the concepts of continuous test automation and how to integrate automation into the development process through continuous integration. The mindset also has to shift because quality is now everyone's responsibility." 
One trend Ahmed is seeing is the introduction of the software development engineer in test: individuals who participate in development and also know testing techniques like white-box testing. "These guys are able to participate with developers, for example in code reviews, understanding what is going to be built, and participating in the technical debt of the code," he said. 
In addition to skills, it is equally important that DevOps teams actually understand what is happening, such as the assessment of code, how tests are impacted, how tests are built and rerun, and how to automatically generate tests, according to Davis. "People need to understand the main criteria. Can it be automated? Is it an open platform? Can you integrate it with different things? The critical challenge is does it deliver, is it accessible and is there support for it," he said. 
Make testing automated and continuous. 
"DevOps as a process doesn't work unless it's automated and continuous," said QA Systems' Davis. He explained that in order to shift from doing nightly builds to continuously integrating testing throughout the entire life cycle, things need to be 100% automated. "You can't do things continuously if you have to manually intervene," he noted. "You need to be able to trigger things and react to activities or react to results and outcomes all the way along your pipeline." 
One of the biggest door openers for automation is the connection on a pipeline, according to Davis. "There is a major advantage to being able to set up a series of quality gates and triggers, and automating different types of activities throughout the testing and development life cycle," he said. "The more that can be automated in these pipeline stages via declarative pipeline scripts, the better it can be." 
Test environments should also be automated, according to Quali’s Ber Lerner. “If you have a testing organization that’s trying to write tests real fast, and they are automated tests, but it takes two weeks or even three days to set up a test environment, are you really Agile?” she asked. Additionally, test environments should be set up in a way that separates environment issues from test issues so there is no confusion. 
Broadcom's Ahmed explained another way to speed up testing is through model-based testing, a technique where tests are automatically generated from models, or descriptions of the system's behavior. According to Ahmed, model-based testing helps auto-generate test assets on the fly to improve test productivity and make sure all the assets are available as soon as the requirements are ready.  
Impact analysis can also be used to speed up testing. Impact analysis, or change-based testing, runs only the tests that were impacted by code changes, limiting the number of tests that are actually run, according to Davis. 
“Quality counts for the bottom line. The way you make that bottom line more efficient is through automation and integration,” said Davis. “If software doesn’t work as well as it should, or the quality level is just not there, companies are going to lose ground to their competitors.”
Leverage manual testing.
Despite the increased need to automate things, there is still a need for manual testing in DevOps. Automated test assets free up testers to do more value-added testing, such as testing from a real-user customer experience perspective or doing exploratory testing on a new piece of functionality, according to Broadcom's Ahmed.
He explained that not everything can be automated, and things that have a human element to them are hard to automate because they can be very subjective. Testers need to be able to go manually into features to actually evaluate the quality. 
“You want to free up testers to go find your edge cases and try to break them,” said Mobile Labs’ McFall. “There are a lot of contextual things automated testing won’t cover.” 
For example, he explained from a user experience perspective, if something like a drop-down menu or search field doesn’t work the way it should, test automation won’t be able to catch that. It will only tell you it is there, but it can’t tell you if it isn’t user friendly. Additionally, things like splash displays that can cover some of the application’s functionality won’t be detected through test automation. Again, it can detect it is there, but it can’t tell if the object is accessible and interactive, McFall explained. 
Additionally, McFall believes it is always good to have a person available to validate the test cases before they are automated just to make sure it is actually worthy of automation. 
Automate feedback.
According to QA Systems’ Davis, there needs to be a way to share information, results and test analysis in a timely manner. 
Even if testing is being done manually, that doesn’t mean the feedback loop can’t be automated. 
Mobile Labs' McFall explained that as you manually test things like user experience and capture interactions, you can send your findings back to a tool in an automated fashion, such as a real user monitoring system or customer experience system.
“I am running through a manual test case, but what is automated is the interaction of the manual verification with something like my ticketing system so that I know in Jira, it is done, passed and automatically sending back logs and other types of information. That to me is how you can have manual testing in a DevOps environment. The feedback loop itself is automated,” said McFall.
Monitor.
There are a number of monitoring tools that can help implement DevOps, according to Broadcom’s Ahmed. For instance, you can track the number of defects getting leaked across the life cycle to see where the leaks are coming from and how to fix them. 
Risks associated with the release can also be looked at from multiple angles: "Even if you develop fast and automate everything, there is still a question on whether you are going to introduce risk or break something when you go into production," said Ahmed. Monitoring techniques can provide insight into how much test coverage has been achieved, the quality of those tests, whether you did the right testing, whether dependencies were tested, and whether you are able to identify and remove any security challenges. 
Similar to starting testing earlier in the life cycle, monitoring can also shift left to get developers and testers involved, according to Ahmed. This will help teams be proactive about how the app will behave once it is deployed and what the monitors and alerts actually mean. 
Don’t treat all applications the same.
For instance, mobile devices and applications are different from browsers and the web architecture. Mobile devices, unlike desktop computers, are very fragile, according to McFall, and there are many challenges around them. You need to understand the programmatic pieces around mobile apps as they relate to their infrastructure and environment. 
"People expect the mobile web to be similar to a desktop web architecture. Things can look similar, but under the hood they are very different. It is hard to automate that if you are trying to use common frameworks," said McFall. 
Enable self-service environments.
According to Quali's Ber Lerner, another main barrier to DevOps is infrastructure provisioning and application provisioning. "Making sure that everyone gets access to the infrastructure and the applications they need, whether it is people trying to run tests or systems that are trying to do automated tasks, becomes a big bottleneck," she explained. In order to overcome this, Ber Lerner explained, teams need access to cloud-agnostic environments, which they can get through self-service portals, APIs or different plugins. "It gives people self-service access to their test DevOps environments while it is still possible for ITOps to govern the way that it is done. It makes it possible for people to be fast but still be in control." 
“At the end of the day, it is about breaking down the barriers you have within the four walls of your organization and making sure people have access to the environments they need, systems they need, and tools they need to be successful,” said Mobile Labs’ McFall.  “Let’s make sure we focus on testing the right things, and the more important things.”
The post Creating a clear testing path to DevOps takeoff appeared first on SD Times.
