NiFi: A Case Study for Disruptive Thinking

This excerpt from my yet-to-be-published book "Victory Horizon", from the chapter titled The Pinnacle of The Expert, explores the origin story of Apache NiFi, how it was born out of breaking with conventional data processing wisdom, and how its time has come again.

The Pinnacle of the Expert is the most treacherous one of all, because there are so many possible variations. Each seems a lonely mountain, but in reality they compose a mountain range longer than the Andes.

We will spend a lot of time pondering the challenges of the Expert Pinnacle, so prepare yourself for a long read in this chapter.

The oracles and gurus that inhabit the Expert Pinnacle will often shout at each other across the tops of these mountains, each with their unique claims of devout righteousness. Holy wars will ensue, with neither combatant willing to cede the high ground or entertain the notion that they might be wrong. But while all of them are quite certain of everything they know, they are blissfully ignorant of all the things they don’t know.

As an expert, you have to ask whether your hard-won knowledge is for self-serving purposes, or whether it is in order to have more impact. Just as in the Pinnacle of the Scholar, knowledge and expertise are meant to be shared and not hoarded, and building a network is key. As I said before, to be a Subject Matter Expert is not a matter of being the organization’s foremost authority on a particular topic or aspect of the business, but to be the organization’s leading student. To not be the one with all the answers, but the one asking all the right questions.

To illustrate, here's a set of stories from my personal history, the first of which is called “To C or Not To C”.

To C or Not To C

I have had the pleasure of working with many brilliant people throughout my career. One of the most brilliant, whom I will call Joe, invented a new way of processing mission critical dataflows out of pure frustration.

He was on yet another field assignment trying to solve the same problem for the umpteenth time: how to get large data files successfully transferred from one site to another using the File Transfer Protocol (FTP). Large data files would take several hours to transmit, and because they were mission critical, compression was not allowed (at the time) because something very important could be lost in translation. The data links were notoriously unreliable, and if the signal was dropped, the file would have to restart transmitting from the beginning. Nothing was more frustrating than being over 90% complete in transferring the data, and then having the link drop out and having to start all over again.

So, being totally fed up, Joe did some research and came across a technique called flow-based processing. This could theoretically resolve the drawbacks of FTP: the large data file would be converted to a flow, and as long as the handle to the flow file was transmitted and not lost, any disruption in the transmission of the data meant that, once the data link was reestablished, the transfer would simply pick up from where it left off.

If the handle was somehow lost, retransmitting the handle would take far less time than trying to resend the whole large file. And once the handle was effectively retransmitted, the data stream would still be able to continue from where it left off.
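To make the idea concrete, here is a minimal sketch of the concept in Java. It is illustrative only; the names (FlowHandle, ResumableTransfer) are invented and it is not drawn from the NiagaraFiles/NiFi codebase. The point is simply that the "handle" is a small record of identity and progress that is cheap to retransmit, while the bulk data stream can resume from its last known offset.

```java
// Illustrative sketch only: a resumable transfer driven by a small "handle"
// (flow identity plus byte offset). Names are hypothetical, not NiFi's API.
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;
import java.nio.file.Path;

public class ResumableTransfer {

    /** The handle: tiny, cheap to retransmit, and enough to resume the flow. */
    public record FlowHandle(String flowId, Path file, long bytesSent) {}

    /**
     * Sends the remainder of the file starting at the handle's offset. If the
     * link drops, the caller reconnects and calls this again with the returned
     * handle; only the unsent tail goes over the wire, not the whole file.
     */
    public static FlowHandle resume(FlowHandle handle, OutputStream link) throws IOException {
        long offset = handle.bytesSent();
        try (RandomAccessFile source = new RandomAccessFile(handle.file().toFile(), "r")) {
            source.seek(offset);                        // skip what already arrived
            byte[] buffer = new byte[64 * 1024];
            int read;
            while ((read = source.read(buffer)) != -1) {
                link.write(buffer, 0, read);
                offset += read;                         // progress travels with the handle
            }
        }
        return new FlowHandle(handle.flowId(), handle.file(), offset);
    }
}
```

In a real protocol the offset would only advance once the receiver acknowledged the bytes, but the shape of the idea is the same: retransmit the small handle, not the large file.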

It was brilliant in its simplicity (though I am oversimplifying it for illustrative purposes). So Joe wrote the first instance of this service in his hotel room, implemented it at the site to send the data back to the main repository, and cut his technical visit short by several days.

His mission successes caught the attention of Senior Leaders at Headquarters, and he was recruited to see if this same technology could be employed to ingest all data into the huge data repositories in the basement of headquarters.

It was at this point in the story that Joe came to work with me in the Media Processing program. He was brought on as the Subject Matter Expert and Frameworks Architect, and I was his first Acting Program Manager. And together, we began breaking all kinds of conceptions about how to process mission critical data.

The first nut to crack, and why this tale is so aptly named, was whether Joe’s flow based processing implementation was going to stand up to the demands of basement dataflow processing. At the time, millions of files were coming into the building daily, and the number was increasing, as was the size of each file. Nearly all the basement systems were implemented in C code, which had highly regimented methods for controlling memory allocation and addresses for processing components so that things ran efficiently. Until they didn’t. These C-based systems were not able to respond to dynamic loads very well, so when a huge file or a bunch of files came in at once, the systems would choke, and a programmer would have to be called in to fix it. But the rigors of programming in this language required a lot of discipline and hard won expertise, and every time a programmer was called upon to resolve a critical mission processing issue, they felt like a hero!

The flow file processing implementation was written in Java, and at the time, Java wasn’t taken seriously by the industrial programming experts. It was a higher-level language, easier for humans to read, but it consumed more memory upon execution as a result. But data throttling could be handled more natively within Java itself, and didn’t require a specific component to be written in C code to handle that function. Without getting too technical, using the right Java implementation, multiple instances of processing components could spin up to handle surges in data, and could spin back down once the volumes returned to normal.
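As a rough illustration of that "spin up, spin back down" behavior, here is a sketch of the general Java technique, not NiFi's actual scheduler, with parameter choices that are purely illustrative. A standard ThreadPoolExecutor can be configured to add worker threads only when a backlog forms and to let the extra threads expire once the surge has passed.

```java
// Minimal sketch of elastic processing in plain Java. Not NiFi's scheduler;
// the class name and tuning values are illustrative assumptions.
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ElasticProcessor {

    private final ThreadPoolExecutor workers = new ThreadPoolExecutor(
            1,                                   // one worker copy under normal load
            16,                                  // up to 16 copies during a surge
            30, TimeUnit.SECONDS,                // idle extras shut down after 30 seconds
            new LinkedBlockingQueue<>(1_000),    // extra copies start once this backlog fills
            new ThreadPoolExecutor.CallerRunsPolicy()); // natural backpressure when saturated

    /** Called for each incoming flow file; surges simply queue up and get worked off. */
    public void onFlowFile(byte[] content) {
        workers.execute(() -> process(content));
    }

    private void process(byte[] content) {
        // transform, route, or deliver the data here
    }
}
```

This is roughly the kind of throttling the C-based systems had to hand-build component by component; in Java it falls out of the standard library.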

Joe would have lengthy debates with the project managers and technical leaders of the existing basement processing systems, who insisted there was no way his cute little toy program was going to be able to replace their systems unless he had thousands of copies of it running on who knows how many servers, because everybody knows how inefficient Java code is!

Then he would just show them. One of the beautiful aspects of his implementation was that the dashboard for controlling all the instances of the system looked like a wiring diagram.

Some of you reading this may be old enough to remember the first word processors, where you weren’t exactly sure how your document was going to format and look until you hit print. Then Microsoft Word came along and what you saw on your screen was what ended up on the paper.

The wiring diagram that system engineers were familiar with and were required to produce as part of any Government project’s documentation was literally the actual executable system running in operations as projected on the computer screen. Even more powerful, new processes could be dragged and dropped onto the palette, or copied and pasted from other parts of the wiring diagram, and those new processes would instantly start operating on live data the moment they were connected to a source. The implementation of flow file processing was able to apply the concept of “WYSIWYG - What You See Is What You Get” to dataflow instead of document/word processing.

And one of Joe’s favorite tricks during demonstrations was to say, “See here? This is all the data coming in from this very critical data source. Now let me just click here and pause it for a second, and you can see that nearly instantaneously we have several thousand megabytes of data backing up…”

This would cause a few of the seniors in the audience to freak out, because in the past when something like that occurred, it meant calling in a programmer to fix the problem ASAP, before “bad things happen.”

And then Joe would say, “So let me unpause it, and you can see that instantaneously several versions of this process spun up, and look, the backlog has been worked off already before I could even finish my sentence. Any questions?”

The point was that you didn’t need to be “The Developer” or “The Expert” to control the dataflow. Anybody could, with a modicum of training, and operational issues could be resolved without having to call in a highly paid support contractor.

As a joke once, my boss and I presented Joe with a copy of "The C Programming Language" by Kernighan and Ritchie. Each C code warrior had a copy within arm’s reach on their desk. I probably still have mine in a box somewhere from my misadventures as a C programmer in the mid-90s. Joe flipped it open and remarked, “Hmm, copyright 1978. I’m copyright 1981 myself.” He made the point as humorously and as eloquently as only he could, and showed that he was in on the joke. C code’s time in the basement had come and gone.

Maybe this was the fear of those other Experts all along. If the system was robust enough to respond to data surges and other disaster scenarios that couldn’t be preconceived and captured in C code, how were they going to continue to be seen as heroes? And they sure as heck didn’t want to go back to school to learn to program in Java!

Joe was also aware that C was falling out of favor in the software development industry, and Java was on the rise. Java was starting to be taught in colleges, and an ecosystem was growing to support developers programming in that language. At some point, it was going to become exceedingly difficult to hire developers who knew how to program in C, and much easier to hire developers who now had some level of proficiency with Java. It was going to be far easier to modernize basement processing while both C and Java experts were still in the employ of the Government, rather than after the C experts had all retired or found new jobs.

So in time, the new service was able to subsume all the requirements of those C-code based systems. Those projects were decommissioned, and those other project leaders either joined Joe’s team, or had to find other projects to lead.

Today, programmers and software developers hardly need to code at all! Since the release of ChatGPT, Stack Overflow, every software developer’s most trusted resource for coding examples and best practices, has seen its traffic reduced by over 50% in eight months, because ChatGPT and other AI tools can generate working, executable code from a well-constructed prompt.

The creativity for a developer then is not in the source code (the product/solution), but in its extensibility and its utility (the outcomes). This is something Joe clearly understood, and he’s still a leader in this business space today. (He no longer works for the Government—they can’t afford him).

There will still be the need for people who know how things operate at the lowest level, to optimize performance that is highly tuned to the available hardware, to develop embedded systems, and to protect operating systems from malicious actors. Again, every pinnacle is important. The world needs Scholars and Experts. If the Expert pillar is the one you choose for your career path, do it with eyes wide open. And do not seek to ascend this pinnacle if you intend to hoard all the knowledge to yourself, or think that the knowledge you gain will last into perpetuity.

Dataflow is not a Waterfall

But that’s only one part of the story. Around the same time, a new way of developing and managing programs was taking hold. And in this case, I was one of the ones who was a little slow to catch on, until I became one of the converted and a staunch advocate. I am speaking of Agile Development.

I had been a project manager for three different Federal Government Agencies, and so I learned three different but very similar ways to develop capabilities. And then as a Program Manager with over a $10M annual budget, I was required to become certified by the Defense Acquisition University, and take continued training to retain my certification. And then in the late-oughts, there was a push for Acquisition Reform within the Department of Defense, which of course led to more mandatory training.

In each instance, the method that was taught for developing projects was the Waterfall Method. Projects and Programs were developed sequentially, following a regimented and document-driven process. Because everyone followed the same process, cost estimation was a straightforward, but not necessarily simple, exercise. Because Government moves slowly, is resistant to change, and is highly risk averse, budgets for large programs had to be decided upon years in advance. All of these things run counter to the idea of Agile development.

This highly regimented, structured way of doing business was geared to producing capabilities for the Military-Industrial Complex: tanks, planes, ships, weapons, etc. And once you had a functioning master copy, you could create duplicates, following the development specifications. Hence the need for strict adherence to process and documentation. The first operational unit was extremely expensive to produce, but the risk of defects in subsequent production runs was greatly diminished.

But as you might surmise, this process is ill suited for developing software, especially for software systems that occupy only one place in a processing architecture that is constantly evolving.

So not only was Joe not a proponent of continuing to develop regimented software via the C programming language, he was not a proponent of the Waterfall Method either. And everyone who controlled the purse strings at the Agency was taught, titled, certified, and indoctrinated in the Ways of the Waterfall.

To further compare and contrast the Agile Methodology with the Waterfall Method for those not familiar, let’s use a more concrete example: imagine you are planning a vacation, a long, cross-country trip. With the Waterfall Method, before you even step out of the door, every single detail of the journey is plotted: your route, stops, activities, and meals. You've booked and pre-purchased everything, and there's no turning back. You withdrew all the money that you need for the trip in cash, so you left your checkbook and credit cards at home. If halfway through you discover a scenic detour or a local festival you'd love to check out, too bad. You've committed to the initial plan, and there's very little room for spontaneous changes.

With the Agile Method, imagine you plan just the first leg of your journey, your first incremental delivery. After reaching your first stop, you talk to locals, discover new attractions, and adjust your next leg based on your experiences and feedback. The journey is more flexible and responsive to your real-time discoveries and preferences. Sure, you have an overall idea of where you want to end up, but how you get there can evolve to ensure the best experience.

Digging deeper, here’s a breakdown comparing the two:

  • Planning vs. Adapting: Waterfall is like the cross-country trip: plan everything upfront, then stick to the plan. Agile is about short plans and frequent adjustments.
  • Feedback Loop: With Waterfall, feedback comes at the end, almost like hearing about a missed festival once you've returned home from your trip. In Agile, feedback is continuous, like chatting with locals throughout your journey to discover the best spots.
  • Flexibility: Waterfall is rigid; changes are hard and costly, similar to cancelling all your pre-booked trip arrangements. Agile expects changes and is designed to accommodate them, embracing the idea that the journey might take surprising and delightful turns.
  • Risk: Waterfall carries risk; if something's wrong, you might only discover it at the end (like finding out a hotel you booked months ago is now closed). Agile minimizes risk by constantly checking and adjusting, ensuring you're on the right path, with results delivered immediately along the way.

So, while Waterfall is a meticulously planned journey, Agile is more of an adventure with a flexible roadmap. Both can get you to your destination, but the experiences along the way and how you handle surprises differ greatly!

Additionally, Waterfall follows a sequential plan: everything happens in steps or phases, and every project traverses the same milestones, more or less. Agile happens in spirals, repeating the same test, design, and delivery steps rapidly, over and over again. But with each delivery, value, outcomes, and mission impact are incrementally (or exponentially) increased.

Show Me The Money!

As Joe’s first program manager, I had to try to secure funding to develop the new Data Processing Service (aka the DPS; we love our three-letter acronyms in the Government), built upon his creation, henceforth known as NiagaraFiles, by playing it off as a Waterfall-developed capability when it was actually being developed via Agile methods.

I’m sure I drove him crazy, and I felt like I had a split personality during this period of my life.

As I already alluded to, large Government projects are not incrementally funded. Though there is a budget process, funding is actually programmed as part of a Five Year Development Program (FYDP, pronounced fie-dip), with major program adjustments/rebalancing allowed every two years. The whole process is aligned to the regimented Waterfall method of developing capabilities and letting Government contracts.

Joe had developed the initial prototype, but he needed a team of developers, testers and operators to bring the DPS to its initial and eventual full operating capability. He needed me to get the money to hire the team of contractors to continue iteratively developing NiagaraFiles.

We couldn’t answer the question of how much the DPS was going to cost five years out, because to use the above analogy, we weren’t sure that our trip to Vegas wasn’t going to require a quick detour to the moon. In their eyes, there was too much risk of requirements creep, and no way to measure the project’s percentage of completion in a way that they were comfortable with. And they were right to feel this way, to a point. There were too many projects within the Government that got started and then were confronted with “unfunded requirements”, having to “tin cup” for additional funds in order to meet their original commitments. I’m certain to them we looked like one of those projects that would never end and would always be asking for more funding.

So we were often at an impasse in terms of how much funding should be programmed in the budget in the outyears. The powers that be wouldn’t approve the initial increment of funding until we could say without equivocation how much funding we would need for the entire five year budget build and product lifecycle.

What’s a couple of Mavericks to do? Believe it or not, there is something akin to the concept of angel investors, even in the Government. I cannot go into the details, but there are special programs funded by our partners that can be accessed for experiments and proofs of concept that are mutually beneficial. One such partner was a lot smaller than the US Government, a lot less bureaucratic, a lot less afraid of new technology, and a lot less risk averse. They saw the potential benefit of NiagaraFiles and decided to “invest” in developing the Initial Operating Capability (IOC) of the DPS. We let our first contract to augment the team of civilian developers working on the project, and we were off and running.

And once we had the IOC of the DPS, we could use the traditional Waterfall cost estimation techniques to determine the cost to get to the Full Operating Capability. The internal spirals we used to get there were irrelevant. By that point, we had enough operational data and maturity within the program that we could map the internal project activity spirals to the larger Government Acquisition and Budget process.

And eventually, years later, this style of software project development not only became acceptable within our Agency, it became the norm. But there were still more battles to fight.

Knowledge Comes Before The Test, Wisdom Comes After

The biggest criticism that the Waterfall traditionalists gave us when trying to get funding approval was that ours was an undisciplined approach. Agile methodologists don’t produce a hard-bound set of immutable documents and design specifications for building their software. Everything is managed from a product backlog that is constantly re-prioritized. There were different rules to produce the end result, and if everybody knew the rules and played by them, a satisfactory delivery was just as likely as with the traditional method, perhaps even more so.

And in the beginning, this was the thing that I had the biggest problem comprehending: the concept of testing first. With the Waterfall method, testing and integration happens after the development phase. How do you test when you haven’t built anything yet?

But then I realized that this is what true scientists do. They make a hypothesis and then they use the scientific method to test whether the hypothesis is true or false.

Agile developers create use cases for how all the actors will interact with the system, and then use agile methods to test the use cases to ensure that they have considered all the desirable and likely (not always desirable) outcomes. Use cases are developed from the point of view of the user, naturally. Only when a particular use case is well defined can the development team begin to write the code.
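For a flavor of what that looks like in practice, here is a hedged sketch of a use case written as an executable test before any real delivery code exists. The names are invented, the Connection class is a deliberately tiny stand-in written only so the use case can be expressed (it is not NiFi's API), and JUnit 5 is assumed. The use case is the one from Joe's demo: an operator pauses a connection, data backs up, and the backlog is worked off once the connection is resumed.

```java
// Hedged sketch of "test first" using JUnit 5. The Connection class below is a
// minimal stand-in so the use case can be expressed; in practice the test comes
// first and fails until the real implementation satisfies it.
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayDeque;
import java.util.Queue;
import org.junit.jupiter.api.Test;

class PauseAndResumeUseCaseTest {

    /** Minimal stand-in: holds flow files while paused, delivers them otherwise. */
    static class Connection {
        private final Queue<String> backlog = new ArrayDeque<>();
        private boolean paused;
        private int delivered;

        void receive(String flowFile) {
            if (paused) backlog.add(flowFile); else delivered++;
        }

        void pause() { paused = true; }

        void resume() {
            paused = false;
            while (!backlog.isEmpty()) { backlog.poll(); delivered++; }
        }

        int backlogSize() { return backlog.size(); }
        int deliveredCount() { return delivered; }
    }

    @Test
    void backlogIsWorkedOffAfterResume() {
        Connection connection = new Connection();

        connection.pause();
        for (int i = 0; i < 5_000; i++) connection.receive("flowfile-" + i);
        assertEquals(5_000, connection.backlogSize());    // data backs up while paused

        connection.resume();
        assertEquals(0, connection.backlogSize());        // backlog worked off
        assertEquals(5_000, connection.deliveredCount()); // and nothing was lost
    }
}
```

The test describes the outcome the user cares about; the team is then free to change how the implementation achieves it.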

So there was still the same attention to detail being provided by using Agile development methods as there was in the Waterfall model. But notice a very important distinction. With all the documentation in the Waterfall method, the focus was on the design/product: what the developer needed to create. In this style of development, it is highly unlikely that the user will ever understand the design documents, much less actually read them. With the Agile method, the focus is on the user, and making sure that the development team gets the outcomes right, which allows for more flexibility in how the end product is created.

Once I understood that, I was totally on board with the Agile way of doing things.

So to the untrained eye, the Agile method appeared undisciplined because the focus was different, and the method of shared information was different. Agile actually requires more discipline, because any knowledge, information, priority or requirement changes must be immediately shared with the group in the appropriate holding environment. And that requires team members to know their role, and everybody else’s. Instead of a mountain of documents being “The Process”, the development team is “The Process”. And everyone on the team has to take a certain degree of ownership for it to work properly.

I’m reminded of a humorous story from after the initial deployment, when one of the training videos for NiagaraFiles went viral on our internal network. The Lead Product Manager, instead of dressing in normal business attire to give the basic product overview for the operations personnel, dressed as a superhero with a mask and a green cape and called himself “NiFiMan”. At the end of the video, he dropped an epic tagline, “NiagaraFiles: We’ve upped our standards, now up yours!”

Our Directorate chief was pleased and curious about how a simple training video was getting so much traction, until she saw the ending. She made us cut it out.

But we had already created the buzz for the product with our intended user base. We now had some raving fans who were eager to use NiagaraFiles over the cumbersome, multiple, non-integrated tools and dashboards they were using to perform their jobs at the time, and those raving fans had influence over the portfolio managers who controlled the purse strings. The combination of the rising demand for our solution and our method of mapping the Agile development process to the Waterfall Acquisition Process, with a clear focus on outcomes, caused the portfolio managers to eventually capitulate and give us a full five year budget.

The lesson here is that whether you are trying to develop a robust software program or decide what the next step in your career should be, make sure you have properly and completely framed the problem you are trying to solve. What is your goal or intended outcome? And you can only discover this by being committed to testing first!

If you jump to the solution phase too soon without testing your hypothesis, you will struggle much more than you need to. You may need to develop a so-called operational prototype, an experiment, a trial employment or internship, to volunteer or work for free, so that you get some real data to analyze to help you understand the problem better. But be prepared to throw your earliest design ideas away. Know what your critical success criteria are, and if your first attempts don’t meet them, go to your next idea.

Once you know what outcomes you are trying to create, remain focused on them, and not on what solution you are building. Otherwise, you might end up chained to a solution and having to work to support a process, or even worse, have the process work against you. Don’t fall in love with your point-wise solution, but stay committed to your goals and to the journey. Many roads lead to Rome, and once you have your destination properly defined, agility will allow you the flexibility to alter your path or the vehicle you are driving to get you there.

Conway’s Law Revisited

The next conventional wisdom we dared to confront goes back to something I’ve already discussed in Chapter 4, Conway’s Law. The DPS, as you might guess from its name, enabled dataflow, essentially getting data transmitted from data sources to data repositories. Most people, especially our non-technical overseers who controlled the purse strings, saw this as a serial activity, again as a result of the training and certifications everyone was required to have. Their view was Data Source A sends data to Data Delivery System B which delivers data to Data Repository C where the data is analyzed by Data Analysis System D and the results of which are presented to the data consumer via Data Presentation System E.

What was often overlooked in this oversimplification is that data processing, or at least data translation, occurred within each of these monolithic architectural blocks. But it was so much easier for our overseers to confine the architectural model in this way, which locked us into being the provider for System B and focusing on the requirements of the systems directly upstream (A) and downstream (C) from us.

But where is the user in this picture? At the business end of System E. So any user requirements following this horizontal integration model were two degrees of separation removed from us. But this was the architecture that we were beholden to, at least for describing it conceptually. To make matters worse, all of the A systems had their own budgets. They were the data sources, and it was easy for them to make the case downtown to justify their existence and get budget increases. The other four systems were part of a Huge Government Program, and the Data Delivery piece was somehow deemed the least important, so it had the smallest piece of the budget for Systems B through E. But I hope it is clear to you that none of the other systems were worth a damn to the mission if the data didn’t arrive in the right place, in the right format, and in a timely enough manner so that actionable intelligence could be produced by the System E users. And since Systems B through E were part of the same big program, any increase in the budget for any of these four architectural monoliths came at the expense of the other three.

What the DPS was advocating for was wrapping data so that any time it had to traverse the architecture, we knew where it came from, why we collected it in the first place, who wanted it, and where it needed to go. Again, I’m seriously oversimplifying things here. But this is essentially how the real internet works. The same data links can be used to transmit Netflix video streams, online gaming data, email, social media, money transfers, etc., and everything winds up in the right place. To nerd out for just a brief moment, instead of ports and sockets, which is how Internet Protocol works, we were using topics and queues.
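In spirit (and only in spirit; the names and data model below are invented for illustration, not NiFi's), the idea looks something like this: every piece of content travels with attributes that carry its provenance and destination, and it is routed by topic rather than by hard-wired, point-to-point connections.

```java
// Illustrative sketch only: content wrapped in routing/provenance attributes
// and delivered via topics and queues. Invented names, not NiFi's data model.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class TopicRouter {

    /** The wrapper: the data itself plus attributes saying where it came from and where it goes. */
    public record FlowFile(byte[] content, Map<String, String> attributes) {}

    private final Map<String, Queue<FlowFile>> topics = new ConcurrentHashMap<>();

    /** Producers publish without knowing the consumer; the attributes decide the destination. */
    public void publish(FlowFile flowFile) {
        String topic = flowFile.attributes().getOrDefault("destination.topic", "unmatched");
        topics.computeIfAbsent(topic, t -> new ConcurrentLinkedQueue<>()).add(flowFile);
    }

    /** Consumers drain only the topics they care about. */
    public List<FlowFile> drain(String topic) {
        List<FlowFile> batch = new ArrayList<>();
        Queue<FlowFile> queue = topics.get(topic);
        if (queue == null) return batch;
        FlowFile next;
        while ((next = queue.poll()) != null) batch.add(next);
        return batch;
    }
}
```

A video stream, an email, and a money transfer can share the same links because the wrapper, not the link, decides where each one ends up; the same attributes can also record why the data was collected and who asked for it, so that context travels with the data itself.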

But to make it work, we needed communication and collaboration amongst all the leaders of the architectural components, and instead we were pitted against each other to fight over our respective pieces of the budgetary pie.

Again, what’s a couple of Mavericks to do? We worked behind the scenes to form alliances. Joe’s technical know-how and vision were beyond reproach, and he was able to build rapport with many of the data source programs, who were not subjected to the same kind of budgetary Hunger Games as the other components of the architecture were. What he didn’t have in terms of actual budget allocation, he had in terms of influence. He was able to influence how the next generation data source systems were being designed (highly compatible with NiagaraFiles) so that we at least could track where data was coming from and what analytics to perform before the data was staged for the analysts.

I focused on the customer representatives, showing them that as NiagaraFiles subsumed more requirements from legacy systems, the outcome wasn’t in dollars saved (which is what they wanted) but in increased reliability, increased agility, and increased throughput. In other words, the value being produced by our block in the architecture was increasing at a rate greater than any of the other blocks. Creating a new paradigm regarding how data was handled while in transit (or making a decision about whether it needed to be sent to basement processing in the first place) was our best hope to keep pace with the increasing volume, variety, and velocity of data coming from our data sources. We had a lot of spirited arguments, some of which got me in trouble with my management, but those arguments were critical in determining what the right success measures needed to be and how to align them to the better outcomes we were after. The old measures of “Pounds of SIGINT” did not serve us anymore, because not all SIGINT data had the same value.

The lesson here? Again, beware of Conway’s Law, especially when someone is trying to constrain you or impose an information flow that is counter to reality. But more importantly, influence is much more powerful than control. By using influence, the DPS didn’t have to rely on arguing for budgetary increases and creating adversaries within the Huge Government Program. We were able to make our job easier by partnering and influencing the work being done upstream and downstream, and leveraging our network of alliances to keep our work lean and agile and produce the greatest mission impact.

The End Of The Story?

NiagaraFiles ended up contributing greatly to mission successes that would have never been possible before, and led to the team winning the Deckert-Foster Award for SIGINT Engineering. Smart data decisions were able to be made closer to the point of collection and ingest, which made basement processing flows much more efficient, some even unnecessary!

When NiagaraFiles was at the peak of its popularity, when everyone was finally singing its praises and thinking it was the best thing since sliced bread, and Joe was getting all kinds of accolades, he would privately say to me, “I give NiagaraFiles about five years until something better comes along.” Because Joe was always focused on the outcomes, and he didn’t fall in love with his own product.

Joe was also quick to point out that even though he was given the title of Dataflow SME, he was not, in fact, the EXPERT. There were people who knew dataflow and “basement processing” far better than he did, and he was focused on providing better, more reliable, and more extensible tools for the real experts to use, so they could focus their time on implementing new flows, rather than bolting on more complexity to a processing paradigm that couldn’t keep pace with the Digital Age.

Joe was also able to accept and implement an idea and vision that wasn’t originally his own. A couple of people I worked with previously had taken software that was initially conceived within the Halls of Government and gone through the process of making it an Open Source product. The primary advantage of doing this from the Government perspective was that the Government would not have to be responsible for the operations and maintenance phase of the product, which is always the longest and most costly. Our Agency was not really in the business of maintaining semi-commercial products. However, for a business that can provide licensed support for an Open Source product (the way Red Hat does for Linux), it can be a very lucrative business. Open Source products are often “free” to download, if you want to take on the burden of maintaining, updating, and patching the product yourself. But if you want someone to do that for you, well, that’s where companies make their money!

Taking a product from the Government to Open Source is a long, document-heavy, and fairly painful process, but I suggested to Joe that he do this with NiagaraFiles. I don’t think he was very excited about the prospects of doing so, but I think he understood the long term benefits of pursuing this course of action. As it turns out, this is why he is no longer employed by the Government and why the Government can’t afford him anymore.

NiagaraFiles, now better known as Apache NiFi, has been adopted by several hundred companies worldwide, including large enterprises such as ExxonMobil, Ford, AT&T, Lenovo, and CapitalOne. NiagaraFiles was also at the heart of NSA’s Technology Transfer Program winning the 2019 National Award for Excellence from the Federal Laboratory Consortium. NiFi continues to evolve and be a highly viable and successful product, long beyond the five-year timeline Joe originally gave it. As of this writing, Joe is working to create a new company to relaunch the product, making it even more compatible with the cloud-based enterprise solutions that are so prevalent in today’s IT marketplace.

Some final thoughts. Remember, though planning is useful, action, even inaccurate action, is better than planning. Kinetic energy is better than potential energy. To be in the game, you have to get in the game. Experiment. Get constant feedback on what works and what doesn’t. Trade your knowledge for wisdom, even if your knowledge was hard won. Knowledge is fleeting, wisdom lasts. Make adjustments. Be agile. Don’t fall in love with any particular solution. Stay focused on outcomes. Look for leverage and where you can ply your influence. Don’t be afraid to have conflicting opinions. Explore the possibility that both Expert views are currently wrong, because neither person knows what they don’t know. Don’t be a Hero. Be a Maverick.

To see what the future has in store for Apache NiFi, visit nifi.apache.org and datavolo.io
