Working on Dispersed Teams – Part 3 of 7

What Remotees Want You to Know

Remote workers are an integral part of the workforce, but it can be easy to forget them. I can vouch for that: more than once a remotee has piped up from the conference room speakerphone, and we exchanged looks of shock and panic through stifled giggles because we had forgotten they were on the line. This isn’t a reflection of a remotee’s value; they’re key players we need. Being out of sight simply lets people fall out of mind. “If they’re at a disadvantage, why don’t they just speak up?” you ask. Some of a remotee’s visibility is within his or her control, but the rest is determined by the centralized team. It’s not a fair arrangement, but it can be fixed.

Continue reading “Working on Dispersed Teams – Part 3 of 7”

OpenJDK and Containers

What can be done to help the OpenJDK JVM play well in the world of Linux Containers?
I thought I’d start tackling this issue by answering some frequently asked questions:

Why is it that when I specify -Xmx1g my JVM uses more than 1 GB of memory?

Specifying -Xmx1g tells the JVM to allocate a 1 GB heap. It does not tell the JVM to limit its entire memory usage to 1 GB: there are card tables, code caches, and all sorts of other off-heap data structures. The parameter you use to specify total memory usage is -XX:MaxRAM. Be aware that with -XX:MaxRAM=500m your heap will be approximately 250 MB.
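
You can see the difference for yourself with a tiny program like the sketch below (the class name MaxRamCheck and the run commands are just illustrative); it prints the maximum heap the JVM actually sized under each flag:

    // MaxRamCheck.java -- a minimal sketch for comparing heap-sizing flags.
    // Example runs, assuming a JDK on the PATH:
    //   javac MaxRamCheck.java
    //   java -XX:MaxRAM=500m MaxRamCheck   // heap capped at roughly half of MaxRAM
    //   java -Xmx1g MaxRamCheck            // ~1 GB heap, but total process memory is larger
    public class MaxRamCheck {
        public static void main(String[] args) {
            long maxHeapBytes = Runtime.getRuntime().maxMemory(); // heap ceiling, roughly -Xmx
            System.out.printf("Max heap: %d MB%n", maxHeapBytes / (1024 * 1024));
        }
    }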

Continue reading “OpenJDK and Containers”

Working on Dispersed Teams – Part 2 of 7

Managers, How to Lead Your Dispersed Team

Since a dispersed team faces a number of challenges, managing one requires some unique tactics. I interviewed two highly respected Red Hat managers to explore good global leadership. Paul Frields is a software engineering manager who leads a global team and happens to work remotely himself. Deborah Curtis has led a variety of facilities groups, including the solitary office managers of Red Hat’s North American offices.

Continue reading “Working on Dispersed Teams – Part 2 of 7”

Offload your database data into an in-memory data grid for fast processing made easy

An in-memory data grid is a distributed data management platform for application data that:

  • Uses memory (RAM) to store information for very fast, low-latency response time, and very high throughput.
  • Keeps copies of that information synchronized across multiple servers for continuous availability, information reliability, and linear scalability.
  • Can be used as a distributed cache, a NoSQL database, an event broker, a compute grid, or an Apache Spark data store.

The technical advantages of an in-memory data grid (IMDG) provide business benefits in the form of faster decision-making, greater productivity, and improved customer engagement and experience.
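
The excerpt doesn’t name a specific product, but as a rough sketch of the distributed-cache use case, here is what a cache-aside read might look like with Infinispan (the open-source data grid behind Red Hat Data Grid) in embedded mode; the cache name, key, and loadFromDatabase helper are hypothetical stand-ins:

    import org.infinispan.Cache;
    import org.infinispan.configuration.cache.ConfigurationBuilder;
    import org.infinispan.manager.DefaultCacheManager;

    public class CustomerCacheSketch {
        public static void main(String[] args) {
            // Start an embedded cache manager and define a simple cache.
            // A production grid would be clustered so entries are replicated
            // or distributed across servers for availability and scalability.
            DefaultCacheManager cacheManager = new DefaultCacheManager();
            cacheManager.defineConfiguration("customers", new ConfigurationBuilder().build());
            Cache<String, String> customers = cacheManager.getCache("customers");

            // Cache-aside read: serve from memory when possible, otherwise fall
            // back to the (slow) database and remember the result in the grid.
            String id = "customer-42";                 // hypothetical key
            String value = customers.get(id);
            if (value == null) {
                value = loadFromDatabase(id);
                customers.put(id, value);
            }
            System.out.println(id + " -> " + value);

            cacheManager.stop();
        }

        // Stand-in for a real JDBC/ORM lookup against the system of record.
        private static String loadFromDatabase(String id) {
            return "row for " + id;
        }
    }

In a clustered setup the same put and get calls operate on a cache whose entries are spread across the grid, which is where the throughput, availability, and scalability benefits described above come from.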

Continue reading “Offload your database data into an in-memory data grid for fast processing made easy”

Service Workers in the Browser (Not Clerks The Movie)

As long as there are humans on earth, places to travel to, and mobile devices in their hands, the need to view content offline will remain, and the APIs created to meet that need will continue to progress along with the demand. The newest API hoping to support offline experiences and put control in the hands of the developer is the Service Worker API, and, luckily for developers, it solves most of the problems that plagued its predecessor, AppCache.

Continue reading “Service Workers in the Browser (Not Clerks The Movie)”
