In the general linear-programming problem, we are given an m × n matrix A, an m-vector b, and an n-vector c. We wish to find a vector x of n elements that maximizes the objective function c1x1 + c2x2 + … + cnxn subject to the m constraints given by Ax ≤ b.
In a system of difference constraints, each row of the linear-programming matrix A contains one 1 and one -1, and all other entries of A are 0. Thus, the constraints given by Ax ≤ b are a set of m difference constraints involving n unknowns, in which each constraint is a simple linear inequality of the form
xj - xi ≤ bk,

where 1 ≤ i, j ≤ n, i ≠ j, and 1 ≤ k ≤ m.
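As a concrete illustration, here is a small hypothetical system of three difference constraints written in the matrix form Ax ≤ b, with a helper that checks feasibility of a candidate x (the particular constraints and names are assumptions for the example, not taken from the text):

```python
# A hypothetical system of three difference constraints over x1, x2, x3:
#   x1 - x2 <= 0,   x2 - x3 <= -1,   x1 - x3 <= 2
# Each row of A has exactly one 1, one -1, and zeros elsewhere.
A = [
    [1, -1, 0],   # x1 - x2 <= 0
    [0, 1, -1],   # x2 - x3 <= -1
    [1, 0, -1],   # x1 - x3 <= 2
]
b = [0, -1, 2]

def satisfies(A, b, x):
    """Check Ax <= b componentwise."""
    return all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_k
               for row, b_k in zip(A, b))

print(satisfies(A, b, [-1, 0, 1]))   # x = (-1, 0, 1) meets all three constraints
```

Note that adding the same constant to every component of a feasible x preserves feasibility, since each constraint depends only on a difference of two unknowns.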
The idea is that in a system Ax ≤ b of difference constraints, the m × n linear-programming matrix A can be viewed as the transpose of an incidence matrix for a graph with n vertices and m edges: each vertex corresponds to one of the n unknowns, and each directed edge corresponds to one of the m constraints. The constraint graph G = (V, E) augments this graph with one additional vertex v0: each constraint xj - xi ≤ bk yields an edge (vi, vj) of weight bk, and v0 has a zero-weight edge to every other vertex, so that every vertex is reachable from v0.
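The construction of the constraint graph can be sketched as follows; the edge-list representation and the `constraint_graph` helper are assumptions for illustration. Each constraint xj - xi ≤ bk becomes an edge (vi, vj) of weight bk, and an auxiliary source v0 (here vertex 0) gets a zero-weight edge to every other vertex:

```python
def constraint_graph(n, constraints):
    """Build the constraint graph as an edge list (u, v, w).
    constraints: list of (i, j, b) triples meaning x_j - x_i <= b,
    with unknowns indexed 1..n. Vertex 0 plays the role of v0."""
    edges = [(0, v, 0) for v in range(1, n + 1)]        # v0 -> vi, weight 0
    edges += [(i, j, b) for (i, j, b) in constraints]   # vi -> vj, weight b
    return edges

# The system x1 - x2 <= 0, x2 - x3 <= -1, x1 - x3 <= 2 becomes:
edges = constraint_graph(3, [(2, 1, 0), (3, 2, -1), (3, 1, 2)])
print(edges)
```

The graph has n + 1 = |V| vertices and m + n edges: one per constraint plus the n zero-weight edges out of v0.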
Given a system Ax ≤ b of difference constraints, let G = (V, E) be the corresponding constraint graph. If G contains no negative-weight cycle, then

x = (δ(v0, v1), δ(v0, v2), …, δ(v0, vn)),

where δ(v0, vi) denotes the shortest-path weight from v0 to vi, is a feasible solution for the system. If G contains a negative-weight cycle, then there is no feasible solution for the system.
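Since the constraint graph may contain negative edge weights, the shortest-path weights δ(v0, vi) can be computed with the Bellman-Ford algorithm, which also detects negative-weight cycles. A minimal self-contained sketch (function name and (i, j, b) encoding are assumptions of this example):

```python
def solve_difference_constraints(n, constraints):
    """Solve a system of difference constraints via Bellman-Ford.
    constraints: (i, j, b) triples meaning x_j - x_i <= b, indices 1..n.
    Returns x = (delta(v0,v1), ..., delta(v0,vn)) as a list, or None if a
    negative-weight cycle makes the system infeasible."""
    # Constraint graph: auxiliary source 0 with zero-weight edges to all vi,
    # plus one edge (i, j) of weight b per constraint.
    edges = [(0, v, 0) for v in range(1, n + 1)] + list(constraints)
    INF = float("inf")
    d = [INF] * (n + 1)
    d[0] = 0                       # source v0
    for _ in range(n):             # |V| - 1 = n relaxation passes
        for u, v, w in edges:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    for u, v, w in edges:          # any further relaxation => negative cycle
        if d[u] + w < d[v]:
            return None
    return d[1:]

# Feasible system: x1 - x2 <= 0, x2 - x3 <= -1, x1 - x3 <= 2
x = solve_difference_constraints(3, [(2, 1, 0), (3, 2, -1), (3, 1, 2)])

# Infeasible system: x2 - x1 <= -1 and x1 - x2 <= -1 form a cycle of weight -2
bad = solve_difference_constraints(2, [(1, 2, -1), (2, 1, -1)])
```

The shortest-path weights satisfy the constraints because every edge (vi, vj) of weight bk obeys the triangle inequality δ(v0, vj) ≤ δ(v0, vi) + bk, which is exactly xj - xi ≤ bk.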