Problem statement

https://leetcode.com/problems/maximum-and-sum-of-array/

Solution 1

Be careful about the problem constraints: it is the numbers that are <= 15, not n <= 15. The first idea is to use a bitmask of length n, but it leads to TLE.

Another idea is to start with slots instead of numbers, because there are only a few of them. For each slot we keep 2 bits: 00 for empty, 01 for occupied once and 11 for occupied twice. The mask has length 2t, but in fact only 3^t of its values are reachable.
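
Here is a minimal sketch of this encoding (plain Python; the helper names are mine and not part of the solution below): how a slot's 2-bit state is read and how it changes 00 -> 01 -> 11 as numbers are placed into it.

def slot_state(m, i):
    # the two bits for slot i: 0 -> empty, 1 -> one number placed, 3 -> two numbers placed
    return (m >> 2*i) & 3

def occupy(m, i):
    # 00 -> 01 on the first number, 01 -> 11 on the second
    return m ^ (1 << (2*i + 1)) if m & (1 << 2*i) else m ^ (1 << 2*i)

m = 0
m = occupy(m, 1)            # slot 1 becomes 01
m = occupy(m, 1)            # slot 1 becomes 11 (full)
print(slot_state(m, 1))     # 3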

Complexity

Time complexity is O(3^t * t * n): there are O(n * 3^t) reachable states and each of them tries t slots. Space is O(3^t * n). With the maximum constraints t = 9 and n = 18 this is roughly 3.2 million operations.

Code

from functools import lru_cache

class Solution:
    def maximumANDSum(self, nums, t):
        @lru_cache(None)
        def fn(k, m):
            # k is the index of the next number to place, m is the 2t-bit slot mask
            if k == len(nums): return 0
            ans = 0
            for i in range(t):
                if m & 1<<2*i == 0 or m & 1<<2*i+1 == 0:    # slot i is not full yet
                    m2 = m ^ 1<<2*i+1 if m & 1<<2*i else m ^ 1<<2*i
                    ans = max(ans, (nums[k] & i+1) + fn(k+1, m2))
            return ans

        return fn(0, 0)
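
As a quick sanity check, LeetCode's first example (nums = [1, 2, 3, 4, 5, 6], numSlots = 3, expected answer 9) can be run like this:

print(Solution().maximumANDSum([1, 2, 3, 4, 5, 6], 3))   # 9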

Solution 2

In fact, there is a polynomial-time solution using the Hungarian algorithm! Imagine that we have 3 slots; create the numbers 1, 2, 3, 1, 2, 3 for them (each slot is listed twice, because it can hold two values). Imagine also that we have 5 values 1, 4, 2, 8, 5; pad them with a 0, so that we have 1, 4, 2, 8, 5, 0, exactly as many values as slot copies. Now build the 6 x 6 matrix cost[i][j] = value[i] & slot[j] and choose 6 cells with the maximum total such that no two of them are in the same row or column: this is exactly the assignment problem.
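
For illustration only (plain Python, not part of the solution below), this is the cost matrix for that toy example:

slots = [1, 2, 3, 1, 2, 3]
values = [1, 4, 2, 8, 5, 0]
cost = [[x & y for y in slots] for x in values]
for row in cost:
    print(row)
# [1, 0, 1, 1, 0, 1]
# [0, 0, 0, 0, 0, 0]
# [0, 2, 2, 0, 2, 2]
# [0, 0, 0, 0, 0, 0]
# [1, 0, 1, 1, 0, 1]
# [0, 0, 0, 0, 0, 0]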

Complexity

It is O(t^3) for time (the Hungarian algorithm on a (2t) x (2t) matrix) and O(t^2) for space.

Code 1

class Solution:
    def maximumANDSum(self, nums, t):
        slots = list(range(1, t+1)) * 2           # every slot appears twice
        nums += [0]*(len(slots) - len(nums))      # pad with zeros to make the matrix square
        cost = [[(x&y) for y in slots] for x in nums]
        return hungarian(cost)[1]                 # second item is the total weight of the matching

def hungarian(G, T=1e-6):
    # Maximum-weight perfect matching on a square matrix G (Kuhn-Munkres).
    # Returns (mu, value), where mu[u] is the column matched to row u.
    n = len(G)
    U, V = range(n), range(n)
    mu, mv = [None] * n, [None] * n               # current matching: row -> col and col -> row
    lu = [max(row) for row in G]                  # feasible labels (potentials) for rows
    lv = [0] * n                                  # ... and for columns
    for root in U:
        au = [False] * n                          # rows already in the alternating tree
        au[root] = True
        Av = [None] * n                           # Av[v] = row from which column v was reached
        slack = [(lu[root] + lv[v] - G[root][v], root) for v in V]
        while True:
            (delta, u), v = min((slack[v], v) for v in V if Av[v] == None)
            if delta > T:                         # no tight edge left: update the labels
                for u0 in U:
                    if au[u0]: lu[u0] -= delta
                for v0 in V:
                    if Av[v0] != None: lv[v0] += delta
                    else:
                        (val, arg) = slack[v0]
                        slack[v0] = (val - delta, arg)

            Av[v] = u
            if mv[v] == None: break               # reached an unmatched column: augment
            u1 = mv[v]
            au[u1] = True
            for v1 in V:
                if Av[v1] == None:
                    slack[v1] = min((lu[u1] + lv[v1] - G[u1][v1], u1), slack[v1])
        while v != None:                          # flip edges along the augmenting path
            u = Av[v]
            prec = mu[u]
            mv[v] = u
            mu[u] = v
            v = prec
    return (mu, sum(lu) + sum(lv))
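
As a quick sanity check on a tiny matrix of my own (not from the problem), the maximum-weight perfect matching of [[3, 1], [1, 3]] picks the diagonal:

print(hungarian([[3, 1], [1, 3]]))   # ([0, 1], 6)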

Code 2

from scipy.optimize import linear_sum_assignment as LA

class Solution:
    def maximumANDSum(self, nums, t):
        # linear_sum_assignment minimizes, so negate the costs and then the result
        cost = [[-(x&y) for y in list(range(1, t+1)) * 2] for x in nums + [0]*(2*t - len(nums))]
        return -sum(cost[r][c] for r, c in zip(*LA(cost)))