In recent years, passive consumers have increasingly been transformed into active producers, often as part of Web 2.0. Newspapers, for example, turn their readers into producers by motivating them to send in photographs of events that are not covered by the newspaper's own journalists or photographers. Over the same period, smartphones have become more and more powerful, now allowing for high-definition video recording. However, on-site mobile video post-production capability has not kept pace, and on-the-fly video editing is not a common practice among amateur or professional content producers. The central question of this thesis is how novel interface and interaction approaches can support mobile video production applications that are feasible for both amateur and professional video editors. While current research focuses mainly on automated or semi-automated film compilation based on algorithmic decisions, this thesis investigates efficient and effective interaction mechanisms for advanced manual mobile video editing. Manual control over the editing process is crucial for upholding the artistic standards an editor expects of his or her final product. Within the scope of the studies presented here, three tasks vital for video editing are examined, implemented and evaluated: browsing media assets, trimming media assets and ordering media assets. The requirements for the proposed interfaces and interaction mechanisms were gathered through a collaborative process that included shadowing, interviewing, workflow analysis and literature research. Each interface and interaction mechanism was evaluated separately with professional video editors and with regular users without any background in video editing. The evaluations show that professional video editors were confident about the usefulness and feasibility of the proposals, whereas regular users tended not to want to edit their videos manually.
However, both groups easily understood the rather complex interaction mechanisms. Furthermore, during the interviews and design sessions a lack of formal and applicable notations for touch-based interfaces and interaction mechanisms was identified. This absence is especially hindering when discussing design issues that are not platform specific or not yet covered by any platform. Therefore, this thesis proposes an extensible sketching notation for mobile gestures. The proposed notation provides a platform-independent basis for the collaborative design and analysis of mobile interactions. In an evaluation with real-world touch-based applications, the notation proved to be a feasible tool, but the evaluation also indicated various starting points for further improvement.