The post The Value of an Architectural Approach appeared first on Topcoder.

“Divide and Conquer” - a pretty simple motto. Tried and true, but not as easy as it seems.

In my view, Topcoder has always been about divide and conquer: break problems into smaller, more manageable “chunks,” parcel them out to architects, designers, and developers, and then put the pieces back together into a fully functional, working application.

One could liken this to a well-crafted watch, where the gears, levers, and hands come together to give you what you expect from a timepiece: the time of day.

There is a hierarchy here at work, and like in any network, it is communication that makes the difference. What also makes the difference? Being able to see both the big picture and the little details. You know, see the forest and the trees kind of thing…

Breaking problems into smaller tasks and parcelling them out to a network of people is not a unique approach by any means, but being able to do it well is rather rare. Crowdsourcing such a hierarchy? Even rarer. But rarer still is the ability to scale such an approach.

If you have, say, 10 people, you can pull that off with a good manager.

100 people? – Doable, but harder.

1,000 people? – Damn difficult.

10,000 people? – Are you serious?

100,000? – Are you mad? Do you need a doctor?

1,000,000? – Welcome to Topcoder! What do you need and when do you need it?

Topcoder has not only worked out how to manage such a beehive of talent, but has also learned how to mitigate many of the problems that emerge when scaling such a large and diverse group of talented people.

Let me tell you a story from the early days of Topcoder:

The year was 2006, competitions were roaring, and Topcoder was becoming a major force in the arena of reusable components. One of the foundational tools that we used then was a UML tool called “Poseidon UML.” We were all using the community edition, as it was free.

We created hundreds of architectural diagrams that fueled the development cycle, and we did it in record time. A typical competition was about 6 days long, and we created architectures from scratch to finish within that window.

Then came an announcement from Poseidon: we could not use their community tool anymore. We had hundreds of designs that would no longer be viewable, and great momentum that we did not want to lose.

So, what do you do? You do the unthinkable. I still remember the announcement from Topcoder: we were going to build our own UML tool from scratch.

Did we do it? Yes we did. In record time. The TCUML tool became a reality.

It took 29 designers (who created the component architectures), 42 developers (who developed the modules/components), 5 assemblers (who put the pieces together into a functional tool), and more than 50 reviewers to create this tool.

Our project manager and main architect drove the network of people forward, all gears worked together and all the pistons pushed. We did not miss a beat. We delivered. Topcoder did what Topcoder does so well.

We learned a lot in those days, and most of all we learned that divide and conquer, when properly managed, works; and it works well. I personally honed my skills as a software engineer and learned a lot of very valuable lessons. We all did.

**Lessons learned and lessons shared.**

My brother (argolite) and I (AleaActaEst) started a Kickstarter campaign to create a series of courses that will teach **Software Architecture skills** to all up-and-coming and aspiring developers, designers, and architects.

We are envisioning a bundle of about 8-10 progressive courses: courses that will help you learn the same lessons we learned about software engineering, including divide and conquer, but without having to go through many of the pitfalls we did.

We want to identify and deliver the best Software Engineering and Architectural principles in one comprehensive place. Please have a look, and if you want such a course bundle to come to life, **lend us a hand by supporting** the campaign. You will get all the courses as well as a session with us.

We wanted to capture the best of our Topcoder competitions and work experience, as well as delve into some cutting-edge topics such as **AI**, **Agent Technology**, **Blockchain**, **AR**, and **VR**… all from the vantage point of **well-crafted architectures**.

The emphasis is on good design **visualization** and **overall documentation**, along with mastering design and architectural patterns. (We use UML, but not exclusively; we cover a few other paradigms as well.) It is foremost about understanding the building blocks of architectures and avoiding the pitfalls of complexity (i.e., applying divide and conquer and the KISS principle).

We also create clone applications of Uber, WhatsApp, and Instagram, among others, to help students build things from scratch: starting with a design and learning to progressively design, document, code, and test, but **always with a good blueprint** in hand.

Thank you for your support!


The post Single Round Match 744 Editorials appeared first on Topcoder.

Define k = (**d** – **a**) div 3. We set b = **a** + k and c = **a** + 2k. By construction, the intervals [**a**, b) and [b, c) each have length k. The only part left is to verify that the interval [c, **d**) has length at least k. If that were not the case, then the sum of the lengths of [**a**, b), [b, c), and [c, **d**) would be less than 3k, contradicting the fact that **d** – **a** >= 3k.

```java
public int[] split(int a, int d) {
    int minLength = (d - a) / 3;
    return new int[] { a + minLength, a + 2 * minLength };
}
```
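As a quick sanity check of the construction (hypothetical inputs a = 0 and d = 10, not from the problem's tests): k = 3, so the cut points are b = 3 and c = 6, and the intervals [0, 3), [3, 6), [6, 10) all have length at least k.

```java
public class SplitCheck {
    // Same construction as above: k = (d - a) / 3, cuts at a + k and a + 2k.
    public static int[] split(int a, int d) {
        int k = (d - a) / 3;
        return new int[] { a + k, a + 2 * k };
    }

    public static void main(String[] args) {
        int[] bc = split(0, 10);
        // Intervals [0,3), [3,6), [6,10) have lengths 3, 3, 4: all at least k = 3.
        if (bc[0] != 3 || bc[1] != 6) throw new AssertionError();
    }
}
```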

We split the explanation into two parts: first we explain how to verify the second condition, and then we discuss how to implement the first condition efficiently.

**The second condition of being a magic number**:

We assume that v is the square of an integer, and now we want to check whether it is magic or not. We can achieve that with the following simple procedure, which directly implements the statement of the condition:

```java
private boolean isMagic(long v) {
    ArrayList<Long> digs = new ArrayList<Long>();
    while (v > 0) {
        digs.add(v % 10);
        v /= 10;
    }
    boolean OK = true;
    boolean odd = true;
    for (int i = digs.size() - 1; i > 0; i--) {
        if (odd && digs.get(i) >= digs.get(i - 1)) OK = false;
        if (!odd && digs.get(i) <= digs.get(i - 1)) OK = false;
        odd ^= true;
    }
    return OK;
}
```

**Implementing the first condition efficiently for all the numbers in [A,B]**:

Our next goal is to run the above procedure isMagic on all the squares between **A** and **B**. Since **A** and **B** can be as big as 10^10, it would be too slow to test each number in that interval for being a square. However, if for each Y that “makes sense” we test whether Y^2 is between **A** and **B**, then we obtain a significantly more efficient solution. Namely, if v is between **A** and **B** and v = Y*Y, then Y is at most 10^5. So, we iterate over all such values of Y, and whenever Y*Y is between **A** and **B**, we invoke the procedure above. The code in Java follows:

```java
public int count(long A, long B) {
    int ret = 0;
    for (long v = 1; v * v <= B; v++)
        if (v * v >= A && isMagic(v * v)) ret++;
    return ret;
}
```

Let cnt(r, c) be a procedure that calculates the sum over the rectangle [0, r) x [0, c). Then, the final output is cnt(**r2**+1, **c2**+1) – cnt(**r1**, **c2**+1) – cnt(**r2**+1, **c1**) + cnt(**r1**, **c1**).
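This is the standard two-dimensional prefix-sum identity. A minimal self-contained sketch (using a small hypothetical 3x3 grid, not the problem's infinite grid) shows the same inclusion-exclusion at work:

```java
public class RectSum {
    // prefix[r][c] = sum of grid cells in [0, r) x [0, c).
    public static long[][] prefix(int[][] grid) {
        int n = grid.length, m = grid[0].length;
        long[][] p = new long[n + 1][m + 1];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                p[i + 1][j + 1] = grid[i][j] + p[i][j + 1] + p[i + 1][j] - p[i][j];
        return p;
    }

    // Sum over rows r1..r2 and columns c1..c2 (inclusive) via inclusion-exclusion.
    public static long sum(long[][] p, int r1, int r2, int c1, int c2) {
        return p[r2 + 1][c2 + 1] - p[r1][c2 + 1] - p[r2 + 1][c1] + p[r1][c1];
    }

    public static void main(String[] args) {
        int[][] grid = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
        long[][] p = prefix(grid);
        // Bottom-right 2x2 block: 5 + 6 + 8 + 9 = 28.
        if (sum(p, 1, 2, 1, 2) != 28) throw new AssertionError();
    }
}
```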

**Calculating the sum over the rectangle [0, r) x [0, c)**:

Without loss of generality, assume that r <= c (the grid is symmetric with respect to r = c). Next, we divide the rectangle [0, r) x [0, c) into two parts: an r x r square in the corner, and vertical bars to the right of this square.

To handle the square, we observe that it has a very regular form. This is how the first 6×6 part of the infinite grid looks:

2 2 2 2 2 2

1 1 1 1 1 2

0 0 0 0 1 2

2 2 2 0 1 2

1 1 2 0 1 2

0 1 2 0 1 2

On the other hand, each bar to the right of the square consists of a single repeated number, and those numbers come from the infinitely repeating sequence 0, 1, 2.

So, we separately handle the square and separately handle the bars. Formulas for each of those cases can be derived relatively easily. One such implementation is provided below.

Code in C++ (courtesy of majk):

```cpp
ll cnt(int r, int c) {
    if (r > c) swap(r, c);
    ll x = r - r % 3;
    ll y = c - x;
    y -= y % 3;
    return 4 * x / 3 + x * x + (c % 3 == 2) * r + y * r + (x + 1) * (r % 3 == 2);
}

ll sum(int r1, int r2, int c1, int c2) {
    return cnt(r2 + 1, c2 + 1) - cnt(r1, c2 + 1) - cnt(r2 + 1, c1) + cnt(r1, c1);
}
```
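As a quick cross-check of the closed form against the 6x6 sample shown earlier, the following direct Java port of the C++ formula (with the displayed grid hard-coded) confirms that cnt(6, 6) equals the sum of all 36 displayed values, and cnt(3, 6) equals the sum of the first three displayed rows:

```java
public class GridCheck {
    // Java transcription of majk's closed-form cnt(r, c).
    public static long cnt(int r, int c) {
        if (r > c) { int t = r; r = c; c = t; }
        long x = r - r % 3;
        long y = c - x;
        y -= y % 3;
        return 4 * x / 3 + x * x + (c % 3 == 2 ? r : 0) + y * r + (r % 3 == 2 ? x + 1 : 0);
    }

    public static void main(String[] args) {
        // The 6x6 corner of the grid, exactly as displayed above.
        int[][] g = {
            {2, 2, 2, 2, 2, 2},
            {1, 1, 1, 1, 1, 2},
            {0, 0, 0, 0, 1, 2},
            {2, 2, 2, 0, 1, 2},
            {1, 1, 2, 0, 1, 2},
            {0, 1, 2, 0, 1, 2},
        };
        long all = 0, firstThreeRows = 0;
        for (int i = 0; i < 6; i++)
            for (int j = 0; j < 6; j++) {
                all += g[i][j];
                if (i < 3) firstThreeRows += g[i][j];
            }
        if (cnt(6, 6) != all) throw new AssertionError();            // both are 44
        if (cnt(3, 6) != firstThreeRows) throw new AssertionError(); // both are 22
    }
}
```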

The main point here is to observe that flipping rows and columns is almost independent. Namely, consider an operation applied at **M**(i, j). The problem statement says that we flip the i-th row while not changing **M**(i, j), and then flip the j-th column while not changing **M**(i, j). But exactly the same outcome is achieved by flipping the *entire* row i (including **M**(i, j)) and then flipping the entire column j. This observation allows us to perform row and column flips almost separately.

The second observation is that applying the entire-row flip to a row an even number of times does not change that row, while applying it an odd number of times results in the same state as applying it once. The same holds for entire-column operations. Hence, what matters is only the parity of the number of times an operation is applied to a row/column.
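The parity claim is easy to see once an entire-row flip is viewed as XOR-ing the row with all ones: two flips cancel. A tiny illustrative sketch (hypothetical 4-bit row, not part of the solution):

```java
public class FlipParity {
    // Flip every bit of a 0/1 string (an "entire-row" operation).
    public static String flip(String row) {
        StringBuilder sb = new StringBuilder();
        for (char ch : row.toCharArray()) sb.append(ch == '0' ? '1' : '0');
        return sb.toString();
    }

    public static void main(String[] args) {
        String row = "0110";
        // Two flips cancel out, so only the parity of the flip count matters.
        if (!flip(flip(row)).equals(row)) throw new AssertionError();
        if (!flip(row).equals("1001")) throw new AssertionError();
    }
}
```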

Finally, observe that the order in which the entire-row and entire-column operations are performed does not matter.

So, we apply entire-row operations first. Consider separately the cases in which the first row is flipped zero times (i.e., an even number of times) and flipped once (i.e., an odd number of times). After flipping the entire first row b times (where b equals 0 or 1), and before applying any entire-column operations, perform entire-row operations on each of the other rows so that they become *the same* as the first row. If this is not possible to achieve, then our choice of flipping the first row b times does not lead to the all-ones matrix.

Once all the rows are the same, we apply the column operations. Applying a column operation is now trivial: the j-th column should be entirely flipped iff M(0, j) equals ‘0’.

At the end, we check whether the parity of the number of entire-row operations applied equals the parity of the number of entire-column operations applied. If the parities match, the output is “possible”; otherwise our choice of b does not lead to the all-ones matrix.

If no choice of b leads to the all-ones matrix, output “impossible”.

Code in Java:

```java
private String[] M;
private int n, m;

private void flipRow(int i) {
    char[] newRow = new char[m];
    for (int j = 0; j < m; j++)
        if (M[i].charAt(j) == '0') newRow[j] = '1';
        else newRow[j] = '0';
    M[i] = new String(newRow);
}

public String isPossible(String[] MM) {
    M = MM;
    n = M.length;
    m = M[0].length();
    String[] origM = new String[n];
    for (int i = 0; i < n; i++) origM[i] = M[i];
    for (int changes = 0; changes < 2; changes++) {
        int cntRow = changes;
        for (int i = 1; i < n; i++) {
            if (M[0].charAt(0) == M[i].charAt(0)) continue;
            cntRow++;
            flipRow(i);
        }
        int cntCol = 0;
        boolean OK = true;
        for (int j = 0; j < m; j++) {
            if (M[0].charAt(j) == '0') cntCol++;
            for (int i = 1; i < n; i++)
                if (M[0].charAt(j) != M[i].charAt(j)) OK = false;
        }
        cntRow %= 2;
        cntCol %= 2;
        if (OK && cntRow == cntCol) return "possible";
        for (int i = 0; i < n; i++) M[i] = origM[i];
        flipRow(0);
    }
    return "impossible";
}
```

There are two natural approaches to this problem: satisfy demands top-down (i.e., starting from the root and moving toward the leaves), or bottom-up (i.e., starting from the leaves and moving toward the root). The official solution takes the bottom-up approach, which we describe next.

Initially, we set v[i] = 0 for each vertex i, and then update v in **n** steps. After the k-th step, we maintain the following invariant: if v[i] = 0 for each i < **n** – k, then the remaining entries of v are set so as to minimize the cost and satisfy the demands of the vertices i >= **n** – k.

Assume that we have executed k steps and are now executing the (k+1)-th step. Let i = **n** – (k + 1).

If i is a leaf, then we set v[i] = **d**[i]. (This value might be updated later.)

If i is a non-leaf, we will eventually have to set v[i] >= **d**[i] in this step, as the only way to satisfy the demand of i is to make v[i] at least **d**[i]. Also, the descendants of i have already satisfied their demands even if v[i] equals 0. So, when we increase v[i], some of the values v[j], where j is a descendant of i, can potentially be reduced. But the values v[j] of which descendants?

We repeat the following:

- Let S_i be the set of all descendants w of i such that every vertex on the path between w and i (excluding w and i) has value zero.
- If v[i] >= **d**[i] and sum_{w in S_i} **c**[w] <= **c**[i], break.
- Update v[w] = v[w] – 1 for each w in S_i.
- Update v[i] = v[i] + 1.

Clearly, this process maintains the invariant that the demands of all the vertices in [i+1, **n**–1] are satisfied. Furthermore, the demand of vertex i is satisfied as well, as the updates continue as long as v[i] < **d**[i]. But is the choice of S_i the best one?

Namely, consider the following example. Vertex i has a direct child 2, and 2 has two direct children, 0 and 1. Assume that each of v[0], v[1], and v[2] is at least 1. When we increase v[i] by 1, instead of decreasing v[2] by 1 we could just as well have decreased v[0] and v[1] by 1 without violating the demands. But would decreasing v[0] and v[1] reduce the cost even further than decreasing only v[2]? If it would, then we would have c[0] + c[1] > c[2]. But this violates our invariant that, after the step in which we processed vertex 2, the values were set optimally under the assumption that the unprocessed vertices have value 0. Namely, if c[0] + c[1] > c[2], an optimal solution should *not* keep both v[0] and v[1] positive, as it pays off to increase v[2] by 1 and decrease each of v[0] and v[1] by 1.

It is easy to generalize this argument in the following way. If we assume that in the loop above it is better to choose a set of descendants {x_1, …, x_r} of w instead of w itself in S_i (where all the vertices x have positive v[x]), then it would imply that c[x_1] + … + c[x_r] > c[w]. But, again as before, this would contradict the invariant that when processing w the values in v were set optimally. (Notice that after a vertex i is processed and its value v[i] is set, v[i] never increases in the subsequent steps.)

To implement these steps efficiently, the official solution maintains a heap for each S_i, together with the total cost of all the vertices in S_i. The heap for S_i is ordered by the current values of the vertices, so after we update v[i] and need to reduce v[w] for w in S_i, it is easy to find the vertex in S_i with the smallest value. Once some v[w] gets reduced to zero by the updates, we merge S_w into S_i. To do that efficiently, we always merge the smaller heap into the larger one.
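The smaller-into-larger merge is the standard small-to-large trick: each element can change heaps at most O(log n) times, since its heap at least doubles in size with every move. A generic sketch (ignoring the lazy valueToSubtract offsets that the full solution must also rebase during a merge):

```java
import java.util.List;
import java.util.PriorityQueue;

public class SmallToLarge {
    // Merge the smaller heap into the larger one and return the result.
    // Each element is moved O(log n) times in total across all merges.
    public static PriorityQueue<Long> merge(PriorityQueue<Long> a, PriorityQueue<Long> b) {
        if (a.size() > b.size()) { PriorityQueue<Long> t = a; a = b; b = t; }
        b.addAll(a); // drain the smaller heap into the larger
        return b;
    }

    public static void main(String[] args) {
        PriorityQueue<Long> small = new PriorityQueue<>(List.of(5L, 1L));
        PriorityQueue<Long> large = new PriorityQueue<>(List.of(4L, 2L, 3L));
        PriorityQueue<Long> merged = merge(small, large);
        if (merged.size() != 5 || merged.peek() != 1L) throw new AssertionError();
    }
}
```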

The full implementation in Java is provided below.

```java
private class Pair implements Comparable<Pair> {
    public long first;
    public int second;

    Pair(long first, int second) {
        this.first = first;
        this.second = second;
    }

    public int compareTo(Pair pair) {
        if (this.first < pair.first) return -1;
        else if (this.first > pair.first) return 1;
        else {
            if (this.second < pair.second) return -1;
            else if (this.second > pair.second) return 1;
            return 0;
        }
    }
}

private ArrayList<Integer> children[];
private PriorityQueue<Pair> heap[];
private long[] sumOfCosts;
private long[] valueToSubtract;
private long[] value;
private long totalCost;
private long[] d, c, p;

private void mergeFirstToSecondHeap(int f, int s) {
    while (heap[f].size() > 0) {
        Pair p = heap[f].poll();
        p.first = p.first - valueToSubtract[f] + valueToSubtract[s];
        heap[s].add(p);
    }
}

public long minimumCost(int n, int[] op, int[] od, int[] oc, int[] params) {
    d = new long[n];
    c = new long[n];
    p = new long[n];
    for (int i = 0; i < od.length; i++) {
        d[i] = od[i];
        c[i] = oc[i];
        if (i < op.length) p[i] = op[i];
    }
    int t = od.length;
    for (int i = t - 1; i <= n - 2; i++) p[i] = (params[0] * p[i - 1] + params[1]) % (i + 1);
    for (int i = t; i <= n - 1; i++) d[i] = 1 + ((params[2] * d[i - 1] + params[3]) % params[4]);
    for (int i = t; i <= n - 1; i++) c[i] = 1 + ((params[5] * c[i - 1] + params[6]) % params[7]);

    children = new ArrayList[n];
    for (int i = 0; i < n; i++) children[i] = new ArrayList<Integer>();
    for (int i = 0; i < n - 1; i++) children[(int) p[i]].add(i + 1);

    totalCost = 0;
    heap = new PriorityQueue[n];
    sumOfCosts = new long[n];
    valueToSubtract = new long[n];
    value = new long[n];
    for (int v = n - 1; v > -1; v--) {
        sumOfCosts[v] = 0;
        value[v] = 0;
        heap[v] = new PriorityQueue<Pair>();
        for (Integer child : children[v]) {
            sumOfCosts[v] += c[child];
            heap[v].add(new Pair(value[child], child));
        }
        long increaseBy;
        while (true) {
            if (heap[v].size() == 0) {
                if (d[v] <= value[v]) break;
                increaseBy = d[v] - value[v];
            } else if (c[v] < sumOfCosts[v])
                increaseBy = heap[v].peek().first - valueToSubtract[v];
            else if (value[v] < d[v])
                increaseBy = Math.min(d[v] - value[v], heap[v].peek().first - valueToSubtract[v]);
            else break;
            totalCost += increaseBy * c[v];
            totalCost -= increaseBy * sumOfCosts[v];
            value[v] += increaseBy;
            valueToSubtract[v] += increaseBy;
            ArrayList<Pair> toMerge = new ArrayList<Pair>();
            while (heap[v].size() > 0 && heap[v].peek().first == valueToSubtract[v]) {
                Pair p = heap[v].poll();
                toMerge.add(p);
                sumOfCosts[v] -= c[p.second];
            }
            // Merge the heaps.
            for (Pair p : toMerge) {
                int u = p.second;
                sumOfCosts[v] += sumOfCosts[u];
                if (heap[u].size() <= heap[v].size()) mergeFirstToSecondHeap(u, v);
                else {
                    mergeFirstToSecondHeap(v, u);
                    heap[v] = heap[u];
                    valueToSubtract[v] = valueToSubtract[u];
                }
            }
        }
    }
    return totalCost;
}
```


The post A Year in Review: 10 of Topcoder’s Best Stories From 2018 appeared first on Topcoder.

During the National Retail Federation’s “Big Show” in New York City, Topcoder reimagined the online retail experience for Macy’s with a live, crowdsourced design thinking exploration. In under 48 hours, we delivered 18 unique design concepts and over 180 unique screens for a new macys.com.

The Department of Veterans Affairs, National Cemetery Administration worked with Topcoder to change how we remember Veterans by immortalizing their memory in an online memorial platform called the Veterans Legacy Memorial. This online memorial will create a lasting legacy for veterans and their families and is expected to be released by Memorial Day, 2019.

This year, we partnered with Verblio, the leader in crowdsourced content. Content is one of the most difficult, niche aspects of marketing, and Verblio makes it easy for agencies and enterprises alike to get high-quality, cost-effective content from a talented crowd of U.S.-based writers.

We worked with our partner GigScaler (a business that connects clients to crowdsourcing talent resources and platforms) to help their client, The Asor Collective (a fashion industry disruptor that helps entrepreneurs in the fashion space grow and strengthen their brands, products, and services), get a unique, high-quality logo, a letterhead design, a branding promo pack, and comprehensive branding guidelines.

Topcoder arrived in the Microsoft Azure Marketplace back in September, beginning with our Ask the Azure Expert solution. Need some expert advice as to which tool or solution to implement? Want best practices for integrating a specific Azure-based solution? Looking for technical expertise to help solve a problem? You can now get Azure-specific help for free from Topcoder.

ConsenSys Diligence, a branch of ConsenSys, is helping people do more (and more securely) in the Ethereum ecosystem. They accelerated their go-to-market strategy with crowdsourced development from Topcoder. We spoke with Bernhard Mueller, Security Engineer at ConsenSys, about the benefits of getting development work done with crowdsourcing and the added technical expertise Topcoder provides.

Quickpath Analytics helps organizations make intelligent business decisions. To make their user experience intuitive and interactive for data engineers and data scientists in a quick, affordable way, Quickpath came to Topcoder to explore the speed, optionality, and quality provided by crowdsourcing. We spoke with Quickpath Co-Founder Alex Fly about his experience with our platform and their vision for Quickpath’s future.

In 2018, IBM and Topcoder partnered to make chatbots accessible for businesses of all sizes. IBM offers technologies that enhance customer experiences, and Topcoder has certified developers available on demand. Check out a webinar with IBM and Topcoder and find out how to get a ready-to-deploy chatbot in weeks rather than months.

Together, Spigit, the world’s leading provider of full life cycle idea management software, and Topcoder can help organizations make digital transformation not just possible, but also painless. See how Spigit and Topcoder can take you from *aha!* moment to finished product.

This fall, we teamed up with Open Assembly, a premier strategy and innovation consultancy, for a report on the state of crowdsourcing. Download the free report and see how whole cities, businesses, and universities save money and time, experiment more, and go faster with crowdsourcing.

Surprise! Last but certainly not least, hear from Topcoder member and copilot Paulo Vitor Magacho da Silva about our award-winning NASA ISS Food Intake Tracker app, and particularly his experience on the project from start to finish.

*Happy holidays from Topcoder! Inspired to get started on your next project with the world’s premier on-demand talent network?*


The post Celebrating RUX Challenges with the big 5-0! appeared first on Topcoder.

The first RUX launched on June 22, 2016, and it was about a drone visualization. You can check it out here. The RUX challenge format arrived after over two years of previous experiments running the LUX (Live User Experience) challenges, which were very successful. The goals for the RUX were consistent timelines (72 hours), consistent prizes, and the ability to run anytime (it didn’t need to be live at an event). There is also a leaderboard that showcases and highlights the variety of “RUX Champions” since the first one.

Below is a sample of designs from past RUXes:

**RUX 8: RUX – 72HR Insurance Call Center Application Design Concepts Challenge**

Design of 1st place winner: **chekspir**

**RUX 21: RUX – Kubik Insurance Quote and Bind Solution Design Concepts Challenge**

Design of 1st place winner: **abedavera**

**RUX 22: RUX – 72HR Employee Communication Mobile App Design Challenge**

Design of 1st place winner: **iamtong**

**RUX 34: RUX – 72HR Retail Store Responsive Web Design Challenge**

Design of 1st place winner: **vlack**

**RUX 44: RUX – NWL Knowledge Sharing Design Concepts Challenge**

Design of 1st place winner: **yoki**

From that moment until now, we have run 49 RUXes with incredible design results! On Thursday, Dec 13, we are going to launch RUX50, and we invite everyone to be part of it. Big prizes and interesting UX problems are waiting for everyone. Anyone can win! The stats below prove it!

Did you know that in all the history of RUXes:

- We have had members from 13 countries winning a placement? Cape Verde, Chile, China, Hungary, India, Mexico, Philippines, Romania, Russia, South Africa, Thailand, Vietnam
- We have 55 members who won placements (prizes)?
- The members with the most placement wins are iamtong with 31 wins, followed by Ravijune with 17 wins and yoki with 14 wins?
- The member with the most 1st placement wins is iamtong with 10 wins, followed by Ravijune with 5 wins and yoki with 4 wins?
- We have 17 unique members who have won a 1st placement exactly once in a RUX?
- iamtong has placed 31 times in RUXes and in 10 of them he won 1st placement?
- iaminfinite has placed 13 times in RUXes but is still looking to capture a 1st place? However, he ranks 4th in total prize money.
- The total amount of placement prizes paid out for RUX challenges is $267,250?
- Top 10 members by the number of placements are: iamtong, Ravijune, yoki, iaminfinite, eriantoongko, abedavera, ssp555, Ariefk, universo?
- The members who have won a first placement are: iamtong (10), Ravijune (5), yoki (4), fairy_ley (3), abedavera (3), Riopurba (2), Ariefk (2). Everyone that follows has 1 win: vlack, universo, ujazz, ToxicPixel, tototpc, Tewibowo, nicokontes, lemonnine, kelvinwebdesign, joni7sunny, eriantoongko, chekspir, Brianbinar, basuki, ArteVisual, ssp555, and PereViki.

If you’re interested in jumping into RUX or LUX challenges, definitely check this out!


The post Takeaways From the Topcoder Team at This Year’s AWS re:Invent appeared first on Topcoder.

Recently, Topcoder sent a team of people for a great week in Vegas to find out what is going on at AWS. Joining over 50K builders (what a great term for software professionals), Topcoder was fully engaged at AWS re:Invent, learning about all things security, machine learning, serverless computing, infrastructure as code, and even self-driving cars using reinforcement learning. Okay, we *may* have spent too much time on that last one… In any case, here’s a rundown of our key takeaways from the event.

Every time I attend an AWS event or read about any of their product releases, I can't help but think about parallels in crowdsourcing. One of the things I took away from Andy Jassy's keynote (it was ~3 hours long; he is definitely one of the more knowledgeable CEOs I have ever heard speak) was his statement that AWS clients don't like to get locked into costs and are always looking for flexibility and on-demand access to resources.

In fact, AWS is proactively making on-demand scaling easier and easier. Look at some of the releases this week: S3 now automatically moves assets from one storage tier to another to save money, DynamoDB is rolling out on-demand read/write capacity (no need to worry upfront about buying capacity), and even the new AWS Managed Blockchain service lets you create and manage scalable blockchain networks. It is also amazing to me that, year over year, the same unit of work costs less than it did the year before. This reminds me of what we are doing at Topcoder, specifically with providing flexible, on-demand access to talent to produce quality work. Companies should be thinking about producing the best output with the most optimized resources.

Another release was the ability to add GPU resources to any EC2 instance, allowing the instance to quickly work on complex machine learning calculations. This isn't much different from what Topcoder is doing, where you can design and build an application with on-demand software resources and quickly add data scientists or mathematicians at a moment’s notice. It is amazing to me how quickly a company can now build a complete project with no servers and no developers by using on-demand resources through Topcoder and AWS.

This was my first AWS re:Invent and there was a lot of content, so I’m still getting my head around what I’ve learned, but here are some key concepts. AWS introduced Control Tower, which “automates the setup of a baseline environment, or landing zone, that is a secure, well-architected multi-account AWS environment.” Automating and securing the setup and deployment of landing zones will enable the secure-by-default position companies are expecting. AWS also introduced Security Hub. Amazon’s security ecosystem has continued to grow since the introduction of Macie and GuardDuty at last year’s re:Invent. Security Hub provides visibility into these, as does Inspector. The consolidated view of security is a welcome addition to the AWS security portfolio.

Though these additions are compelling, one thematic change that will further enable the democratization of work is the realization of infrastructure as code (IaC). Customers constantly ask, “How can you build for *my* environment?” Technologies like CloudFormation have been around for more than five years, but as the adoption of cloud-based services has increased, the ability to replicate infrastructure is quickly becoming a reality. Soon we will be creating sandboxes of customers’ environments with containers, and IaC, with secure-by-default constructs, will enable a developer to contribute anytime, anywhere.

More chicken wings than I’ve ever seen in one place, pub crawls, human labyrinths, Skrillex, Mija, air hockey, robots, drones, “Killer Queen,” swag raining down on you from all directions… Of course I ignored all of that and got straight to business!

It’s been a few years since I’ve been to a big industry convention like AWS re:Invent. It was awesome to have the chance to attend. For me, the greatest joy in a conference like this is the unexpected. I signed up for and attended several sessions on some great topics, including DevOps scaling, Amazon Mechanical Turk, event-driven architectures, and more.

However, it was the unplanned sessions I wandered into that really made the difference. Having nothing specific on the agenda, I joined John for an IAM policy session. I didn’t know what to expect, but I learned a *ton* in that session. (I was completely oblivious to the newer guardrails functionality, along with several brand-new announcements.) I also got an unexpected and thrilling introduction to machine learning and SageMaker thanks to a last-minute decision to attend a DeepRacer workshop.

I had some great conversations with our existing vendors and discovered some exciting new ones in the expo halls. There were also quite a few lines to wait in. However, waiting in lines at a convention like this is an opportunity to meet new people and hear about their experiences. I met folks from all over the place, all passionate about cloud services. In short, there was a really great sense of community. At Topcoder, we crowdsource software development; we work every day with developers from all over the world. Community is what we’re all about.

I honestly hate traveling. It’s one of my least favorite things to do. Living up to my handle of lazybaer, I prefer to hunker down at home. However, when Mess (Dave) asked me if I wanted to go to Las Vegas for AWS re:Invent, I absolutely could not pass up the opportunity. re:Invent was impressive in breadth and scope. It was spread across seven different casinos, with over 3,500 sessions to choose from, on topics ranging from auto-scaling elastic compute resources to the latest and greatest in serverless tech to artificial intelligence.

A developer at heart, I was most drawn to the announcements and sessions around the new advances in AWS’s serverless platform, Lambda. Lambda had a few major announcements:

- The first was support for custom runtimes on Lambda. This allows developers to utilize any language runtime under the hood with their Lambdas. In addition, AWS has released Lambda Layers, which allow developers to share code and data artifacts across Lambda functions. These two capabilities should be a boon for developers, as they promote asset reuse and extensibility. You can read more about these here.
- Another related announcement is the open-sourcing of the Firecracker microVM system, the underlying engine that Lambda runs on. This will open the door for others to adopt Lambda-like development on their own infrastructure and will likely create an interesting competitive market for serverless hosting.

Besides all the cool serverless announcements, another area that caught my eye was augmented reality (AR) and virtual reality (VR). AWS's Sumerian has been out for a little while now, but seeing it in use firsthand was very impressive. On the expo floor we met with Kyle Roche, the GM for Sumerian, at the developer lounge. There we watched a designer wearing an Oculus headset create an amazing 3D scene in real time on the screen right in front of us. Kyle explained how rapidly developers can create and deploy AR and VR experiences that will run not just on high-end headsets, but also within any WebGL-capable browser. We also checked out the Sumerian-based Into the Spider-Verse AR app that places Spider-Man live into your environment.

Last but certainly not least: artificial intelligence. I was very impressed with the full-court press of AI offerings being presented. From the training sets now generally available to all, to the GPU on-demand capabilities announced, to the DeepRacer, it's clear that AWS wants to commoditize AI, bring it to the masses as a service, and enable developers to integrate AI into everything.

We spent a significant amount of time checking out the DeepRacer and its reinforcement learning. DeepRacer is basically a DeepLens camera attached to an RC car. The system used a "reward" fitness function that we developed on our own and fed to a DeepRacer simulator. This function "rewarded" the neural net for making good driving decisions as we attempted to get the racer around the track. It's surprisingly easy to get started with this, and it's all built on current AWS offerings like SageMaker. I'd really like to see how we could bring this, maybe in conjunction with Sumerian, to the Topcoder Open next year.

So what does this all mean for Topcoder? As AWS continues to innovate, drive down costs, and enable developers to do more with less, we're going to do more exciting and cutting-edge work with the Topcoder Community. We foresee Topcoder and our amazing community being a perfect pairing for all these new AWS offerings, and we can't wait to see what we can cook up together.

The post Takeaways From the Topcoder Team at This Year's AWS re:Invent appeared first on Topcoder.

The post The Top 10 Community Blog Posts from 2018 appeared first on Topcoder.

They come up with the plan, the posts, the scheduling, etc. We wouldn't have a great blog without our community. Special thanks to DaraK and quesks for all the behind-the-scenes work they do to make the blog tick.

I have loved reading all the blog posts, and now, as the end of the year is upon us, I wanted to highlight some of my favorite reads from throughout the year. You can always read all community blog posts here. I encourage you to reach out to me directly if you ever want to contribute to the Topcoder blog. If you have a story to tell, a tip to share, or insight into the latest trends in the industry, we want you to share it.

**Starting Off With Algorithm To Becoming Design Champion, Meet kharm!**

I am from Tangerang, Indonesia. I'm 28 years old and I studied Computer Science at Binus University. And I don't belong to Team Apple, Team Dog, Team Nike, or Team Mercedes... read more

**Using Meaningful Typography in Your Design**

Typography is the art of arranging type with the aim of making your design look aesthetically appealing and easy to read. Typography itself is an art and it represents 90% of the design; it helps support your brand and convey your message... read more

**Topcoder Development Challenges - Decoding A Reviewer's Mindset**

While all Topcoder tracks (design, development, and algorithms) have a fierce competitor pool and cut-throat competition, the number of development "code" challenges launched on a weekly basis outweighs the others by sheer volume... read more.

**Mid-Year Topcoder Community Update with a Look Ahead to TCO19**

February seems like a long time ago, as we're almost in August. On Valentine's Day, per tradition, we shared our 2018 State of the Community... read more.

**The 5 Best Resources for Mobile Developers**

Apps are big, screens are small, and life as we know it is on its head again. Mobile apps play a vital role in a world that's increasingly social and open, and the focus has changed from what's on the Web to the apps on our mobile devices. Mobile apps are no longer an option; they are imperative... read more.

**From Idea to MVP – Topcoder Mobile Development Process Demystified**

I've had the good fortune of copiloting some amazing Topcoder projects over the last 5 years. Although it's been a varied mix of diverse technologies, I've absolutely loved working on mobile projects building some fantastic Android and iOS apps for several customers... read more.

**Road to TCO18 Finals: Stories Worth Knowing From 2018 Regional Events**

The TCO18 Finals are less than a couple of months away. However, we have some great stories to share from our regionals this year. Regionals are what pave the way for the Finals. So without further ado, here are some great stories from the 2018 Regionals starting with… read more.

**A Problem Writing Journey**

I started problem writing in the summer of 2014. It was the summer before my senior year at university, and I had an internship in Palo Alto. I had trouble adjusting to work since it was my first experience in industry… read more.

**Highlighting the Women Behind Competitive Coding and Design**

Every year, Topcoder holds the Topcoder Open (TCO), our biggest live in-person coding and design competition. And this year, it's in Dallas, Texas, birthplace of the frozen margarita machine and home to an airport larger than the island of Manhattan... read more.

**From Community To Corporate: Members Becoming Topcoder Employees**

The early 2000s were heady times for young techies in the Indian subcontinent. Big cities were fast becoming IT hubs and thousands of jobs were being generated. Smaller cities, however, were not in the fray of this IT boom… read more.


The post Single Round Match 743 Editorials appeared first on Topcoder.

The intended solution for this problem is simple: test whether C(**pieces**+1, 2) is at most **plankLength**. If it is bigger, return "impossible"; otherwise return "possible".

We now sketch why this approach is correct.

Observe that if C(**pieces**+1, 2) > **plankLength**, then we cannot construct a valid solution: even if all the assigned lengths are the smallest possible (the i-th piece having length i), their sum is C(**pieces**+1, 2).

On the other hand, if C(**pieces**+1, 2) <= **plankLength**, we can set the lengths of the first **pieces**-1 pieces to be 1 through **pieces**-1, and the last piece to have length **plankLength**-C(**pieces**, 2) >= **pieces**.

Code in Java:

```
public String canItBeDone(int plankLength, int pieces) {
    if (pieces * (pieces + 1) / 2 <= plankLength)
        return "possible";
    return "impossible";
}
```

This task is a graph theory problem. Namely, consider the graph in which the vertex set consists of the rooks. Two vertices, i.e., two rooks, are connected by an edge iff for those two rooks all the three conditions are satisfied (the rooks are in the same row or in the same column, and there are no other rooks between them).

It is not hard to see that all the rooks in one connected component of this graph have to be of the same color. To see that, assume that two rooks in the same component have different colors and consider a shortest path between them. On this path there exists an edge whose endpoints have different colors, which violates the friendly coloring.

Similarly, if two rooks are not in the same component, then they can be colored by different colors.

Now this task comes down to building such a graph. The final output is the number of connected components that this graph contains.

To count the number of connected components one can simply run DFS or BFS, which is more than sufficient to solve this problem within the time limit.

**Note**: Observe that out of those three conditions, the third (the one saying that there should be no rooks between two rooks attacking each other) can be omitted without changing the task. So, for the sake of easier implementation, we can just ignore this constraint.
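Since the third condition can be dropped, the components can equivalently be computed by unioning every pair of rooks that share a row or a column. Below is a minimal disjoint-set sketch of that idea (the class and helper names are my own, not from the reference solution):

```
import java.util.*;

public class RookComponents {
    static int[] parent;

    // Find with path compression.
    static int find(int x) {
        return parent[x] == x ? x : (parent[x] = find(parent[x]));
    }

    static void union(int a, int b) {
        parent[find(a)] = find(b);
    }

    // Counts connected components of rooks ('R') that share a row or a column.
    static int minFriendlyColoring(String[] board) {
        List<int[]> rooks = new ArrayList<>();
        for (int i = 0; i < board.length; i++)
            for (int j = 0; j < board[i].length(); j++)
                if (board[i].charAt(j) == 'R') rooks.add(new int[]{i, j});

        parent = new int[rooks.size()];
        for (int i = 0; i < parent.length; i++) parent[i] = i;

        // Union any two rooks in the same row or the same column.
        for (int i = 0; i < rooks.size(); i++)
            for (int j = i + 1; j < rooks.size(); j++)
                if (rooks.get(i)[0] == rooks.get(j)[0] || rooks.get(i)[1] == rooks.get(j)[1])
                    union(i, j);

        Set<Integer> roots = new HashSet<>();
        for (int i = 0; i < rooks.size(); i++) roots.add(find(i));
        return roots.size();
    }

    public static void main(String[] args) {
        // The two rooks in the top row form one component; the third rook is alone.
        System.out.println(minFriendlyColoring(new String[]{"R.R", "...", ".R."})); // prints 2
    }
}
```

This counts the same components as the DFS approach, just phrased as union-find instead of a graph traversal.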

Code in Java:

```
private int[][] vec = {{0, 1}, {0, -1}, {1, 0}, {-1, 0}};
private int n, m;
private boolean[][] done;
private String[] board;

private boolean possible(int i, int j) {
    return i > -1 && j > -1 && i < n && j < m;
}

private void mark(int i, int j) {
    done[i][j] = true;
    for (int k = 0; k < 4; k++) {
        int ni = i;
        int nj = j;
        for (int t = 0; t < Math.max(n, m); t++) {
            ni += vec[k][0];
            nj += vec[k][1];
            if (possible(ni, nj) && board[ni].charAt(nj) == 'R' && !done[ni][nj]) {
                mark(ni, nj);
                break;
            }
        }
    }
}

public int getMinFriendlyColoring(String[] board) {
    this.board = board;
    n = board.length;
    m = board[0].length();
    done = new boolean[n][m];
    for (int i = 0; i < n; i++)
        Arrays.fill(done[i], false);
    int ret = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            if (board[i].charAt(j) == 'R' && !done[i][j]) {
                ret++;
                mark(i, j);
            }
    return ret;
}
```

We reduce this task to the following question: for a given g, is there a pairing C such that each number of C is divisible by g? This question can be solved as follows.

**Testing whether g divides each number of some pairing**:

For a given g, compute **A**[i]%g for each i. Then the number of i's such that **A**[i]%g == 0 has to be even. If g is even, then the number of i's such that **A**[i]%g == g/2 has to be even as well. For every other remainder k, the number of i's such that **A**[i]%g == k has to equal the number of j's such that **A**[j]%g == g-k.

So, we now know how to decide whether there exists a pairing certifying that the output to the problem is at least g. But how do we find all the relevant values of g for which we should run the above procedure? One choice is to try all the values 1 through 2 * 10^9 in place of g, but that would be too slow. Next we discuss how to select a small but sufficient set of candidates for g.

**A small number of relevant gās**:

Observe that **A**[0] has to be paired with some element of **A**. So, the final output has to divide **A**[0]+**A**[i] for some i. In our solution, for each of the divisors d of **A**[0]+**A**[i] we ask whether there exists a pairing as described above (for g = d). Luckily, the number of positive divisors of an integer X is at most 2 * sqrt(X).

In our case, that is at most 2 * sqrt(2 * 10^9) many divisors of **A**[0]+**A**[i], for any fixed i. Since there are at most 30 such candidates **A**[0]+**A**[i], the total number of divisors of interest is at most 30 * 2 * sqrt(2 * 10^9). For each of those divisors we invoke the procedure described above. The largest of those candidates for which the procedure above reports that there exists the corresponding pairing is the output of the solution.

Code in Java:

```
private ArrayList<Integer> candidates;

public int maximumGCDPairing(int[] A) {
    int n = A.length;
    candidates = new ArrayList<Integer>();
    for (int i = 1; i < n; i++) {
        int val = A[0] + A[i];
        int d = 1;
        while (d * d <= val) {
            if (val % d == 0) {
                candidates.add(d);
                candidates.add(val / d);
            }
            d++;
        }
    }
    int ret = -1;
    for (Integer candidate : candidates) {
        if (candidate < ret)
            continue;
        ArrayList<Integer> rems = new ArrayList<Integer>();
        int cntZero = 0, cntHalf = 0;
        for (int val : A) {
            if (val % candidate == 0)
                cntZero++;
            else if (2 * (val % candidate) == candidate)
                cntHalf++;
            else
                rems.add(val % candidate);
        }
        if (cntZero % 2 == 0 && cntHalf % 2 == 0) {
            // At this point, rems.size() % 2 == 0 holds.
            Collections.sort(rems);
            boolean OK = true;
            for (int i = 0; i < rems.size() / 2; i++)
                if (rems.get(i) + rems.get(rems.size() - i - 1) != candidate) {
                    OK = false;
                    break;
                }
            if (OK)
                ret = candidate;
        }
    }
    return ret;
}
```

We solve this problem using dynamic programming. We define the dp state by triples (idx, B, L) with the following meaning:

dp[idx][B][L] = the solution to the problem where:

- The signs for indices 0 .. idx-1 are already chosen (i.e., they are deterministic). Let F[i] be the value at the i-th index. Note that F[i] = **sequence**[i] or F[i] = -**sequence**[i].
- The largest sum of a contiguous subsequence of F[0 .. idx-1] is B.
- The largest sum of a contiguous subsequence of F[0 .. idx-1] that contains F[idx-1] is L.

The output to this problem is dp[0][0][0].

Let n be the length of **sequence**. Then, as the boundary condition, we have

dp[n][B][L] = B, for any L

It is also not hard to define the transitions from one to another state. Namely, from state (idx, B, L) there are two possible transitions.

- Either we set F[idx] = **sequence**[idx], which happens with probability 1 - **probMinus**[idx]/100. Then we define L1 = L + **sequence**[idx] and transition to (idx + 1, max(B, L1), L1).
- Or we set F[idx] = -**sequence**[idx], which happens with probability **probMinus**[idx]/100. Then we define L2 = L - **sequence**[idx] and transition to (idx + 1, B, max(L2, 0)).

Finally, we have

dp[idx][B][L] = (1-**probMinus**[idx]/100) * dp[idx + 1][max(B, L1)][L1] + **probMinus**[idx]/100 * dp[idx + 1][B][max(L2, 0)]

Note that since each element ranges from 0 to 50 and it never pays off to have subarrays of negative value, there are at most 2501 possibilities for the second and the third dimension of dp. **sequence** is of length at most 50, so there are at most 50 possibilities for the first dimension. Furthermore, from each state we make only two transitions. So, the overall running time of the solution above is 2 * 50 * 2501 * 2501, which fits in the time limit. However, defining the full dp as described above would require too much memory.

To reduce the memory, we use a standard trick. Instead of storing all 50 coordinates of the first dimension of dp at each step of the algorithm, we store only two consecutive ones. This is possible because to fill dp[idx] we only need information from dp[idx + 1]. This is sufficient to cut the space by a factor of 25, which is good enough to store the compressed dp table in memory.

**Alternative explanation** (courtesy of misof):

Another point of view is as follows. We can read the input from left to right and run Kadane’s algorithm in parallel on all possible sequences. After each step, the dp values simply represent the probabilities with which each state of the algorithm was reached. Here, state is the pair

(best solution so far, best solution that ends at the end of what we processed).
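To make this view concrete, here is plain Kadane's algorithm for a single fixed sign assignment (a minimal sketch of my own; the pair (best, endingHere) corresponds to the state pair above):

```
public class Kadane {
    // Largest sum of a contiguous, possibly empty, subsequence.
    static int maxSubarray(int[] a) {
        int best = 0;       // best solution so far
        int endingHere = 0; // best solution ending at the current index
        for (int x : a) {
            endingHere = Math.max(0, endingHere + x);
            best = Math.max(best, endingHere);
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(maxSubarray(new int[]{2, -3, 4, -1, 5})); // prints 8
    }
}
```

The dp in this problem simply tracks, over all random sign choices, the probability of Kadane's algorithm being in each (best, endingHere) state after each step.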

Code in Java:

```
public double solve(int[] sequence, int[] probMinus) {
    double[][][] pp = new double[2][2501][2501];
    int N = sequence.length;
    int S = 0;
    for (int n = 0; n < N; ++n) S += sequence[n];
    for (int a = 0; a <= S; ++a)
        for (int b = 0; b <= S; ++b)
            pp[0][a][b] = 0;
    pp[0][0][0] = 1;
    for (int n = 0; n < N; ++n) {
        int old = n % 2, nju = 1 - old;
        for (int r = 0; r <= S; ++r)
            for (int s = 0; s <= S; ++s)
                pp[nju][r][s] = 0;
        for (int r = 0; r <= S; ++r)
            for (int s = 0; s <= S; ++s)
                if (pp[old][r][s] > 0) {
                    double pM = probMinus[n] / 100.;
                    int nr1 = r + sequence[n];
                    int ns1 = Math.max(s, nr1);
                    pp[nju][nr1][ns1] += (1 - pM) * pp[old][r][s];
                    int nr2 = Math.max(0, r - sequence[n]);
                    int ns2 = Math.max(s, nr2);
                    pp[nju][nr2][ns2] += pM * pp[old][r][s];
                }
    }
    double answer = 0.;
    for (int r = 0; r <= S; ++r)
        for (int s = 0; s <= S; ++s)
            answer += s * pp[N % 2][r][s];
    return answer;
}
```

This is a mix of graph theory and dynamic programming. We split the explanation in two parts. First, we describe how to test whether there exists *any* permutation satisfying the constraints. Then we show how to use this procedure to construct the lexicographically smallest one.

**Verifying whether there exists any permutation**:

Make a graph on n vertices, vertex i corresponding to the i-th index of the output. Two vertices u and v are connected iff there exists i such that **x**[i]==u and **y**[i]==v. If **d**[i]==1, then color the edge {u, v} in red and otherwise color {u, v} in blue.

Now, for every component, we start from an arbitrary vertex and assign label 0 to it. Then we run DFS to assign labels to the other vertices as well. If two vertices are connected by the blue edge, then they should have the same labels; otherwise, one vertex should have label 0 and the other one label 1. This also implies that the same label *within* the same component denotes vertices of the same parity. If at any point we discover an edge that does not follow these rules, we conclude that there is no such array P.

Once we have all the 0-1 labels, for every component we count the number of labels being 0 and the number of labels being 1. For component comp, let those counts be cnt0[comp] and cnt1[comp]. Then, to each comp we should either assign cnt0[comp] even and cnt1[comp] odd numbers of **val**; or the other way around, i.e., we should assign cnt0[comp] odd and cnt1[comp] even numbers of **val**. Moreover, we should make these assignments for all the components simultaneously, and each element of **val** should be assigned to exactly one component. If this is not possible to achieve, then there is no required P.

Let E be the number of even and O be the number of odd numbers of **val**. We perform a dynamic programming to check whether there is a way to assign E and O across the components as described. This is a standard DP problem that can be implemented to run in O(E * number_of_components) time.
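The assignment check described above can be sketched as a reachability DP over the number of even values used so far. This is my own minimal illustration (the editorial only calls it a standard DP), under the assumption that each component takes either (cnt0 evens, cnt1 odds) or the swapped option:

```
public class ParityAssignment {
    // Given per-component label counts cnt0[i] and cnt1[i], and E available even
    // values, decide whether each component can take either cnt0[i] or cnt1[i]
    // even values so that exactly E even values are used in total.
    static boolean feasible(int[] cnt0, int[] cnt1, int E) {
        boolean[] reach = new boolean[E + 1];
        reach[0] = true;
        for (int c = 0; c < cnt0.length; c++) {
            boolean[] next = new boolean[E + 1];
            for (int e = 0; e <= E; e++) {
                if (!reach[e]) continue;
                if (e + cnt0[c] <= E) next[e + cnt0[c]] = true; // label-0 vertices get even values
                if (e + cnt1[c] <= E) next[e + cnt1[c]] = true; // label-1 vertices get even values
            }
            reach = next;
        }
        return reach[E];
    }

    public static void main(String[] args) {
        // Two components with (cnt0, cnt1) = (2, 1) and (1, 1); E = 3 even values available.
        System.out.println(feasible(new int[]{2, 1}, new int[]{1, 1}, 3)); // prints true
    }
}
```

Since the odd counts are forced once the even counts are chosen, checking reach[E] is enough, and the loop runs in O(E * number_of_components) time as stated above.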

We now describe how to reconstruct the lexicographically smallest P.

**Reconstruction of lexicographically smallest P**:

We begin by explaining how to reconstruct P[0]. Let comp be the component containing vertex 0. Without loss of generality, assume that vertex 0 is labeled by 0 in comp. Let S_even be the smallest even and S_odd the smallest odd number of **val**. (If S_even or S_odd does not exist, set its value to infinity.) Assume that S_even < S_odd. Then we check whether there is a permutation P such that P[0]=S_even. To check that, we test whether we can assign cnt0 and cnt1 of the components other than comp so that the total number of even values and the total number of odd values of **val** in this assignment is E-cnt0[comp] and O-cnt1[comp], respectively. Here we subtract cnt0[comp] from E (and cnt1[comp] from O) because we are assuming that all the vertices of comp having the same parity as vertex 0 are assigned even values of **val**.

If no such assignment exists, then we set P[0]=S_odd. Note that to verify whether such an assignment exists we can use the same method as in the first part.

When we set the value of P[0] (that is, S_even or S_odd), we mark which vertices in comp should be set to even values and which to odd values. Notice that this is uniquely determined once we decide the parity of vertex 0. **Note**: at this point we only mark the parity of the remaining vertices in comp; we do not assign any values to them yet.

The analogous approach is applied if S_odd < S_even.

The rest of the reconstruction of P is obtained in a similar way. Namely, assume that so far we have reconstructed P[0], ..., P[idx-1], and next we want to reconstruct P[idx]. If the parity of vertex idx was already decided, then we let P[idx] be the smallest unused value of **val** which has the same parity as vertex idx.

If the parity of vertex idx has not been decided yet, then we apply the same procedure to idx as we applied to vertex 0 above. We only have to take into account that some of the components have been processed already.

The code of the solution mentioned above can be found here.

The post Single Round Match 743 Editorials appeared first on Topcoder.

The post Single Round Match 742 Editorials appeared first on Topcoder.

During the SRM I was solving tasks in C++, but since I am currently learning Haskell, I will also present my solutions in Haskell.

To figure out how many candies Elisa will get if she picks a specific candy bag, we do integer division by (K + 1) to learn how many rounds of giving candies Elisa will do. Then we take the rest of the candies (the remainder of the division by (K + 1)), add it to the number of rounds, and that is the total number of candies Elisa would get from that bag. We do this for every bag of candies and return the biggest number.

```
#include <vector>
using namespace std;
class BirthdayCandy {
public:
int mostCandy(int K, vector <int> candy) {
int maxCandy = -1;
for (int i = 0; i < (int) candy.size(); i++) {
const int numCandy = candy[i] / (K + 1) + candy[i] % (K + 1);
if (numCandy > maxCandy) maxCandy = numCandy;
}
return maxCandy;
}
};
```

It is really easy to describe this solution in Haskell, resulting in just one line of logic!

```
module BirthdayCandy (mostCandy) where
mostCandy :: Int -> [Int] -> Int
mostCandy k = maximum . map (\c -> c `div` (k + 1) + c `mod` (k + 1))
```

At first, this one seems hard if we want to make sure that we arrange the queens in an optimal way so that the biggest number of them can fit on the board.

However, if we look at the constraints (board size and the maximal number of queens), we can figure out that the board is so big that we can just put queens one by one in a greedy manner on it and there will always be enough space for all of them.

Therefore, we go with the greedy solution where we, for each queen that we want to add, go through all the fields on the board until we find the first field that is available (not under attack by queens that are already on the board).

Then we put the new queen on the board and continue with the next queen.

To figure out if the field is under attack we check that it is not on the same diagonal, row or column as any other queen.

A nice way to check if two fields are on the same diagonal is to check if the absolute difference of their row indexes is equal to the absolute difference of their column indexes. If it is, they are on the same diagonal; otherwise, they are not.

```
#include <vector>
#include <cstdlib>
using namespace std;
class SixteenQueens {
public:
vector <int> addQueens(vector <int> row, vector <int> col, int add) {
vector<int> result;
for (int i = 0; i < add; i++) { // For each queen that we want to add.
bool done = false;
for (int r = 0; r < 50 && !done; r++) { // Let's try every field on the board. Every row.
for (int c = 0; c < 50 && !done; c++) { // And every column.
bool fieldOk = true; // Is field ok to put queen on it?
// Check if any of previous queens is compromising the field.
for (int j = 0; j < (int) row.size() && fieldOk; j++) {
if (r == row[j] || c == col[j] || (abs(r - row[j]) == abs(c - col[j]))) {
fieldOk = false; break;
}
}
if (fieldOk) { // If field is ok to go, put queen on it.
row.push_back(r); col.push_back(c); // Add to queens on board.
result.push_back(r); result.push_back(c); // Add to results.
done = true;
}
}
}
}
return result;
}
};
```

This problem can naturally be described recursively, which results in a very elegant solution in Haskell.

```
module SixteenQueens (addQueens) where
addQueens :: [Int] -> [Int] -> Int -> [Int]
addQueens row col 0 = []
addQueens row col add = rAdd:cAdd:(addQueens (rAdd:row) (cAdd:col) (add - 1))
where (rAdd, cAdd):_ = [(r, c) | r <- [0..49], c <- [0..49], not (isUnderAttack (r, c))]
isUnderAttack (r, c) = any (\(r', c') -> r == r' || c == c' || abs (r - r') == abs (c - c')) (zip row col)
```

I was not able to finish this task during the SRM. I did come up with the solution and started implementing it, but the "thinking part" took me a long time and I did not have enough time left to finish. I finished it later and confirmed in the practice room that it works correctly.

While the problem is in itself simple (combine resistors to achieve a wanted value), the solution is not so obvious.

The way the problem is stated made me consider whether there is a dynamic programming solution; however, I concluded that the search space is just too big and I could not see any smart way to narrow down the search. There are just too many choices: resistor values can be real numbers, we can do series or parallel, and we can combine any two resistors that we have built so far.

Therefore, I decided to somehow simplify the situation.

First, we can observe that if we put a resistor in series with itself, we get a resistor with double the resistance, and if we put it in parallel, we get a resistor with half the resistance. This means that if we start with a resistor of resistance R, we know how to easily create resistors of resistance 2^x * R where x is an integer.

Since we start with a resistor of 10^9 nano-ohms (from now on I will express all values in nano-ohms), we can create a 2 * 10^9 resistor by putting it in series with itself, or we can create a 0.5 * 10^9 resistor by putting it in parallel with itself. We can then repeat this process with the newly created resistors to create smaller and bigger resistors.

Next, we can observe that if we have an inventory of resistors with various values, we can just put some of them in series and probably get pretty close to the target value. But, what kind of values should those be, so that we can actually do that?

- We need a resistor of a **small enough** value that we can achieve any possible target value with high enough precision. The precision defined in the problem is 1 nano-ohm.
- We need resistors with **big enough** values so that we don't need to combine too many resistors in series, since the limit stated by the problem is 1000 commands, which means that we can't combine more than 1000 resistors.

We start our inventory with the only resistor we have at the beginning, resistor #0 with a resistance of 10^9 nano-ohms.

Combining the observations so far, we can see that if we repeatedly create smaller and bigger resistors as described before (by doubling and halving), we will cover the space of possible target resistor values (from 0 to 10^18 nano-ohms) with values that are logarithmically (base 2) spaced.

To cover that space densely enough, and to be sure that we have a small enough resistor to always obtain the needed precision, we can keep halving the smallest resistor in the inventory and doubling the biggest resistor in the inventory until the smallest resistor is 2^-30 * 10^9 (~0.93) nano-ohms and the biggest resistor is 2^29 * 10^9 (~0.54 * 10^18) nano-ohms.

Now, let’s observe the commands needed to build such inventory.

First, **let's build resistors from 2 * 10^9 to 2^29 * 10^9**, that is, 29 resistors. We can do this by repeatedly putting the last resistor in the inventory in series with itself, 29 times.

This results in following commands:

`[(0, 0, 0), (1, 1, 0), ..., (27, 27, 0), (28, 28, 0)]`

Next, **let's build resistors from 2^(-1) * 10^9 to 2^(-30) * 10^9**, that is, 30 resistors. The first of these we build by putting resistor #0 in parallel with itself, and then we repeatedly put the last resistor in the inventory in parallel with itself, 29 times.

This expands our list of commands to a total of 59 commands:

`[(0, 0, 0), (1, 1, 0), ..., (27, 27, 0), (28, 28, 0), (0, 0, 1), (30, 30, 1), (31, 31, 1), ..., (59, 59, 1)]`

Values of created resistors (in ohms, not in nano-ohms!):

`[2^1, 2^2, ..., 2^28, 2^29, 2^-1, 2^-2, 2^-3, ..., 2^-30]`

**Does this inventory satisfy our needs from before?**

The smallest resistor is smaller than required precision, so that is ok. The only question that remains is, are we sure that we can always build target resistor with less than 1000 – 59 = 941 commands by using this inventory of resistors that we created?

First, let’s see how we can use our resistor inventory to build the target resistor.

The problem states that the last command in our list of commands has to be the one constructing target resistor.

With inventory just freshly built, the last command is currently the one that constructs resistor of 2^-30 ohms (~0.93 nano-ohms).

If that is not close enough to the target resistor, we are going to add to it, in series, another resistor from the inventory that brings us as close as possible to the target (the biggest resistor that is smaller than the difference between the target value and the last resistor in the inventory). We repeat this process until we construct a resistor that is close enough to the target resistor, and **that is it**!

Now, to answer the question from before, if we can guarantee that we are not going to need more than 1000 commands in total.

Since the distance between our inventory resistors is logarithmic with base 2, each resistor we add to the series while building the target resistor (as described above) halves the error between our current best resistor and the target, which means that a few dozen steps (<= 60) will always be enough to build it!

This means we are in total never going to have more than 59 + 60 = **119 commands**, which is way below the limit of 1000.

That is all! Although slightly complex, this solution performs very well in terms of speed and is reliable. I do wonder if there is a simpler solution, or one that returns the smallest number of commands, but I have not thought of one yet.

```
#include <vector>
using namespace std;
class ResistorFactory {
public:
vector <int> construct(long long nanoOhms) {
vector<int> commands;
vector<double> values;
values.push_back(1000000000.0); // Product 0 is 10^9 nano-ohms (1 ohm).
for (int i = 0; i <= 28; i++) { // Products 1 to 29, each next is 2 times bigger.
commands.push_back(i); commands.push_back(i); commands.push_back(0);
values.push_back(((double) values[values.size() - 1]) * 2);
}
// Product 30, which is 10^9 / 2 (product #0 / 2).
commands.push_back(0); commands.push_back(0); commands.push_back(1);
values.push_back(values[0] / 2);
for (int i = 30; i <= 58; i++) { // Products 31 to 59. each is 2 times smaller.
commands.push_back(i); commands.push_back(i); commands.push_back(1);
values.push_back(((double) values[values.size() - 1]) / 2);
}
// Inventory is built! Now we use our inventory to build the final resistor.
double remaining = nanoOhms - values[values.size() - 1]; // Difference between what we have and target.
while (remaining >= 1) {
int bestIdx = -1; // Biggest resistor that is smaller than remaining amount.
for (int i = 0; i < (int) values.size(); i++) {
if (values[i] <= remaining && (bestIdx == -1 || remaining - values[i] < remaining - values[bestIdx])) {
bestIdx = i;
}
}
commands.push_back(values.size() - 1); commands.push_back(bestIdx); commands.push_back(0);
values.push_back(values[values.size() - 1] + values[bestIdx]);
remaining -= values[bestIdx];
}
return commands;
}
};
```

The first Haskell version I came up with ended up pretty big because I wrote very expressive code. This comes naturally when writing Haskell, because it is so easy to define functions and data types.

```
module ResistorFactory (construct) where

import Data.List (foldl', maximumBy)

precisionInNanoOhm = 1 :: Double

data ResistorBuild = Parallel Resistor Resistor | Series Resistor Resistor | OnePiece
data Resistor = Resistor { getResistorId :: Int, getValueInNanoOhm :: Double, getBuild :: ResistorBuild }

resistorToCommand :: Resistor -> (Int, Int, Int) -- Transforms a resistor into the command format Topcoder expects as the result.
resistorToCommand (Resistor _ _ (Series r1 r2)) = (getResistorId r1, getResistorId r2, 0)
resistorToCommand (Resistor _ _ (Parallel r1 r2)) = (getResistorId r1, getResistorId r2, 1)
resistorToCommand _ = error "Can't be transformed to command!"

createFromSeries :: Int -> Resistor -> Resistor -> Resistor
createFromSeries id r1 r2 = Resistor id (getValueInNanoOhm r1 + getValueInNanoOhm r2) (Series r1 r2)

createFromParallel :: Int -> Resistor -> Resistor -> Resistor
createFromParallel id r1 r2 = Resistor id (v1 * v2 / (v1 + v2)) (Parallel r1 r2)
  where (v1, v2) = (getValueInNanoOhm r1, getValueInNanoOhm r2)

type Inventory = [Resistor]

resistor0 = Resistor 0 (10^9) OnePiece -- This is the resistor we start with, 1 ohm.

initialInventory :: Inventory
initialInventory = [resistor0]                          -- Initial resistor.
    -: (extendInventory createFromSeries [1..29])       -- Big resistors (> 10^9 nano ohm).
    -: ((createFromParallel 30 resistor0 resistor0):)   -- First small resistor.
    -: (extendInventory createFromParallel [31..59])    -- Other small resistors (< 10^9 nano ohm).
  where
    -- For each of the given ids, take the last resistor from the inventory, create a new one
    -- with that id from it, and add it to the end of that same inventory, repeating the process.
    extendInventory create ids inv = foldl' (\i@(r:_) id -> (create id r r):i) inv ids
    a -: f = f a

construct :: Integer -> [(Int, Int, Int)]
construct target = transformResult $ construct' initialInventory (fromIntegral target)
  where transformResult = map resistorToCommand . tail . reverse

-- Given an initial inventory and the value of the target resistor, returns an inventory
-- whose last resistor has that value (within the defined precision).
construct' :: Inventory -> Double -> Inventory
construct' inv@(lastResistor:_) target = if diff < precisionInNanoOhm
    then inv -- We are done.
    else construct' (newResistor:inv) target
  where
    diff = target - getValueInNanoOhm lastResistor
    closestResistor = maximumBy (\r1 r2 -> compare (getValueInNanoOhm r1) (getValueInNanoOhm r2))
        $ filter (\r -> getValueInNanoOhm r <= diff) inv
    newResistor = createFromSeries (getResistorId lastResistor + 1) lastResistor closestResistor

I also refactored it to make it shorter but less expressive, and therefore much more similar to the C++ version. I prefer the first, more expressive version.

```
module ResistorFactory (construct) where

import Data.List (foldl', maximumBy)

initialInventory :: [(Int, Int, Int, Double)]
initialInventory = [(undefined, undefined, undefined, 10^9)]       -- Initial resistor.
    -: (extendInventory (\id v -> (id, id, 0, v * 2)) [0..28])     -- Big resistors (> 10^9 nano ohm).
    -: ((0, 0, 1, 10^9 / 2):)                                      -- First small resistor.
    -: (extendInventory (\id v -> (id, id, 1, v / 2)) [30..58])    -- Other small resistors.
  where extendInventory create ids inv = foldl' (\i@((_,_,_,v):_) id -> (create id v):i) inv ids
        a -: f = f a

construct :: Integer -> [(Int, Int, Int)]
construct target = transformResult $ construct' initialInventory (fromIntegral target)
  where transformResult = map (\(id1, id2, c, _) -> (id1, id2, c)) . tail . reverse

construct' :: [(Int, Int, Int, Double)] -> Double -> [(Int, Int, Int, Double)]
construct' inv@(lastRes:_) target = if diff < 1 then inv else construct' (newRes:inv) target
  where lenInv = length inv
        diff = target - value lastRes
        (bestResId, bestRes) = maximumBy (\(_, r1) (_, r2) -> compare (value r1) (value r2))
            $ filter (\(_, r) -> value r <= diff) $ zip [lenInv - 1, lenInv - 2 .. 0] inv
        newRes = (lenInv - 1, bestResId, 0, value lastRes + value bestRes)
        value (_,_,_,v) = v

The post Single Round Match 742 Editorials appeared first on Topcoder.

The post December's Big Give: Bonus Prizes Just For Being You! appeared first on Topcoder.

That's why we're bringing you December's Big Give: weekly rewards for our community members just for competing!

**What does that mean?**
Every week in December, we'll be doing a drawing for a bonus cash prize, and all you have to do to be entered in the drawing is submit on any Topcoder challenge (including Marathon Matches and Single Round Matches) within that week (tasks are excluded from this contest).

**Prizes and Raffle Dates Timeline**
December's Big Give will kick off Wednesday, November 28, and every Wednesday we'll pick a winner. Then on January 2, 2019, we'll announce our monthly winners!

All winners will be announced in the weekly newsletters and on the website. If you don't get our newsletters, this may be a good time to subscribe! Go here to subscribe.

Now about the prizes… Every Wednesday during December's Big Give, three Topcoder members will win $250 plus a Topcoder t-shirt.

We will select 100 members on January 2 who will also win a Topcoder t-shirt.

Shirts & cash!

**Eligibility**

In order to qualify to win a prize during December's Big Give, you must meet the general eligibility criteria in our terms.

**Winners:**

*Week 1!*

kishore_g84

ramil.lim

TenkoChabashira

*Week 2!*

winterflame

Jackgid16

sidthekid

Weekly contests will start on Wednesdays at 00:00 and end at 23:59 Tuesday evenings.

Ask questions here.

The post Bringing the NASA ISS Food Intake Tracker App to Life appeared first on Topcoder.

I came back in 2010 and, more motivated, started gaining experience by participating in challenges, earning a few second and third places. Finally, in October of 2010, I managed to win my first challenge. This made me very happy and gave me the confidence to continue working on Topcoder.

In 2011, I started working with the Assembly Competitions at Topcoder, a more complex development track, and this kept me very busy for quite some time. Most of the challenges I participated in were related to Java backend. However, Topcoder started posting challenges targeting mobile devices. That caught my attention immediately, as it was a new technology being introduced. I even decided to buy an iPad to start participating in the challenges, as many of them required that the code run on a device and not only in the simulator.

In 2013, the NASA ISS Food Intake Tracker (IFIT) iPad application challenges started to appear. First the conceptualization challenge, then design, UI prototype, prototype conversion, etc. However, due to the limited amount of time I had to work on Topcoder challenges, I wasn't able to compete in them.

My first break in the project came in 2014, when Rashid Sial, Senior Manager of App Development at Topcoder, posted a First2Finish challenge to fix some issues in the iPad application, and I haven't let go ever since.

The project was very exciting since it involved NASA, for which I have a great personal admiration, and mobile device development. However, the first big task I had to solve was the performance of the application.

The mobile application would allow users to log into any available iPad device without a password, and it also had to work offline, without a connection to the backend server. The offline functionality posed a real challenge in keeping the data synchronized between all devices. The original solution was to use a Samba server storing files with the data entered by the astronauts, to be removed once all iPads had consumed that information.

The performance with this solution wasn't very good, and from time to time we had some lockups in the iPad application. We decided to replace the Samba server with a database server. The iPad application would communicate directly with the database to store the data, and a synchronization table would keep all newly entered records available so that every iPad device could stay up to date. As soon as all the devices were synchronized, the records would be removed.

In parallel, NASA also requested a simple admin web page. This tool would allow NASA to insert, delete, and update user and food records. Also, the tool would allow the bulk upload of data from a CSV file.

Again, the solution presented its limitations, and we decided to decouple the iPad application from the database. A REST service was created to expose all the needed endpoints to the iPad application and communicate with the database. Today, that is the architecture of the application.

I have to point out that, throughout the development process, we maintained very close communication with the NASA team. They provided priceless feedback from their tests and also added new requirements.

Also, some bug hunts were needed to find all the issues in the code and point out possible improvements. Eventually, one member was selected to continue working on the QA side of the project. A special thank you goes to Topcoder member nithyaasworld, who helped a lot in finding my mistakes, of which there were quite a few! I started growing paranoid about QA because of that, but the process made me realize how important it was for the success of the project.

Finally, when NASA approved the IFIT iPad application and the admin web tool, we moved to the last part: deploying the backend code to NASA's servers. Since we didn't have access to their servers, a virtual machine was used with the same operating system to be used aboard the ISS. In the end, a deployment script was developed to install and configure all the necessary software.

And it was with great pleasure that we learned that the iPad with the IFIT application, along with the backend and database servers, had made its way to the ISS. We even received this quote from an astronaut: "I use IFIT every day, for every meal. It's awesome." That made me very proud.

In October 2017, we received the news that the project had won NASA's Group Achievement Award. This was great news and showed how Topcoder's crowdsourcing community and platform can bring such an amazing, life-changing application to life. But the biggest surprise was when Rashid contacted me just this month to say that he had received our certificates. I said, *What?!* Nevertheless, this was really over the top.

This has been an amazing year for me at Topcoder. I've finally made it as a copilot, and now there is this achievement award. And the project is ongoing; there is still work to be done to bring more stability to the system and help the astronauts keep in shape!

*Final NASA IFIT app designs*

I want to give a *huge* thanks to Rashid, who gave me the opportunity to work on such an amazing project, who helped me by providing feedback on issues, and who organized all the things developers don't want to handle.

This was a project that started with the Topcoder Community and "ended" with a developer from Brazil who, despite working two full-time jobs (my day job and my dad "job"), managed to squeeze in some extra hours during the day and night to help bring this project, and other projects, to life.
