The problem of correspondence search (or matching) remains a central focus of research across a variety of domains of computer vision. To date, the overwhelming tendency in the literature has been to rely either on the assumption of brightness constancy or on the assumption of patchwise local surface planarity. However, among the aspects of matching that render the task challenging as displacements become large are the tendency of these two assumptions to ultimately break down, together with the combinatorial explosion of the resulting space of possible motions, or labels. In this thesis, we tackle three domains that call for matching at large displacements: (i) depth super resolution (SR), (ii) RGB-D scene flow, and (iii) optical flow, in which we carry out correspondence search (i) within a single depth map, (ii) across a pair of RGB-D frames, and (iii) across a pair of color images, respectively. For optical flow and RGB-D scene flow, we relax the reliance on brightness constancy by matching image gradients rather than raw intensities in the available color images. Recognizing for all three that 2D motion in the image plane is ultimately a function of 3D motion in the underlying scene, we overparameterize the 2D motions that patches of pixels are allowed to undergo in the image plane in terms of 3D rigid body motions applied to patches of 3D points: a geometrically motivated form of correspondence search. For depth SR and RGB-D scene flow, we apply these 3D rigid body motions directly to patches of 3D points encoded in the available depth maps, thereby overcoming the assumption of local surface planarity; for optical flow, we recover 3D points for all pixels of a patch by intersecting the respective pixel viewing rays with a 3D plane, and subject the resulting 3D points to a 3D rigid body motion.
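The geometric warp just described can be sketched in a few lines: recover a pixel's 3D point either from a measured depth (the depth SR and RGB-D scene flow cases) or by intersecting its viewing ray with a candidate 3D plane (the optical flow case), apply a 6 DoF rigid body motion, and reproject into the image. This is only an illustrative sketch under assumed pinhole intrinsics; the constants and function names are hypothetical, not the thesis implementation.

```python
# Hypothetical pinhole intrinsics (fx, fy, cx, cy), chosen for illustration.
FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0

def backproject(u, v, z):
    """Recover the 3D point observed at pixel (u, v) given a measured depth z."""
    return ((u - CX) * z / FX, (v - CY) * z / FY, z)

def ray_plane_point(u, v, plane):
    """Intersect the viewing ray of pixel (u, v) with a plane n . x = d,
    where plane = (nx, ny, nz, d); stands in for a measured depth."""
    nx, ny, nz, d = plane
    # Unnormalized ray direction in camera coordinates.
    rx, ry, rz = (u - CX) / FX, (v - CY) / FY, 1.0
    s = d / (nx * rx + ny * ry + nz * rz)
    return (s * rx, s * ry, s * rz)

def rigid_motion(p, R, t):
    """Apply a 6 DoF rigid body motion (rotation R, translation t) to point p."""
    x, y, z = p
    return tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i] for i in range(3))

def project(p):
    """Reproject a 3D point into the image plane."""
    x, y, z = p
    return (FX * x / z + CX, FY * y / z + CY)
```

Applying `rigid_motion` to every 3D point of a patch and reprojecting yields the 2D motion of the whole patch from a single 6 DoF (or, with the plane, 9 DoF) label, without assuming the patch moves fronto-parallel in the image.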
In the former case, we accordingly seek, individually for each pixel, a 6 degree of freedom (DoF) 3D rigid body motion describing the motion of the patch of points associated with that pixel; in the latter, we similarly assign to each pixel a 3 DoF 3D plane in addition to a 6 DoF rigid body motion, for a total of 9 DoF. In both cases, even a coarse discretization of these high-dimensional, continuous label spaces would yield a number of labels far too large to consider exhaustively as displacements become large. We overcome this problem by instead calling on variants of the simple yet effective PatchMatch correspondence search algorithm to grow and refine sparse correspondence seeds in a manner tailored to our 6 DoF and 9 DoF motion models.
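The PatchMatch idea the thesis builds on can be illustrated in miniature: alternate scan-order propagation of a neighbor's current label with random refinement around each pixel's best label under a shrinking search radius, only ever accepting cost improvements. The sketch below uses a scalar label per pixel on a 1D row as a stand-in for the 6/9 DoF motions; the function name, label parameterization, and schedule are illustrative assumptions, not the thesis's algorithm.

```python
import random

def patchmatch_1d(cost, width, labels_init, iters=4, radius=8.0):
    """Minimal PatchMatch-style search: cost(x, label) is any per-pixel
    matching cost; labels are scalars standing in for high-DoF motions."""
    labels = list(labels_init)
    best = [cost(x, labels[x]) for x in range(width)]
    for it in range(iters):
        # Alternate forward and backward scan order between iterations.
        step = 1 if it % 2 == 0 else -1
        order = range(width) if step == 1 else range(width - 1, -1, -1)
        for x in order:
            # Propagation: adopt the previously visited neighbor's label if better.
            nx = x - step
            if 0 <= nx < width:
                c = cost(x, labels[nx])
                if c < best[x]:
                    labels[x], best[x] = labels[nx], c
            # Refinement: random perturbations with a halving search radius.
            r = radius
            while r >= 0.5:
                cand = labels[x] + random.uniform(-r, r)
                c = cost(x, cand)
                if c < best[x]:
                    labels[x], best[x] = cand, c
                r /= 2.0
    return labels
```

Because candidates are sampled and propagated rather than enumerated, the per-pixel work stays small even though the underlying continuous label space is vast, which is what makes the approach viable for 6 DoF and 9 DoF labels at large displacements.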