Chair of Multimedia Communications and Signal Processing

Always-on Deep Neural Networks


Contact
Rohan Asthana, M.Sc.
E-Mail: rohan.asthana@fau.de
Recent deep neural network architectures are complex and place large demands on computational resources. Consequently, deploying a deep neural network on hardware-constrained devices remains a challenge. To address this problem, network architectures have to be redesigned with storage, floating-point operations, and parameter discretization taken into account. This process is known as neural network compression. In addition, edge hardware needs to be investigated and redefined for efficient neural network operation. In particular, specialized integrated-circuit (IC) accelerators can provide great adaptability in the memory hierarchy and can exploit medium-precision mixed-signal compute circuitry to drive down power consumption. This DFG project aims to find new mixed-signal circuits and architectures with runtime-tunable compute precision, as well as to design and train hardware-fitted hybrid-precision neural networks on such hardware. Within this custom-hardware setting, neural network compression will be explored as a co-design and co-training task in which the hardware is part of the optimization.

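To make the idea of runtime-tunable compute precision concrete, the following is a minimal illustrative sketch (not the project's actual method): symmetric uniform "fake quantization" of a weight tensor to a bit-width chosen at runtime, as a software stand-in for a compute unit whose precision can be tuned. The function name and setup are hypothetical.

```python
# Hypothetical sketch of runtime-selectable weight precision, assuming
# symmetric uniform quantization; not the project's actual hardware model.
import numpy as np

def quantize_weights(w: np.ndarray, bits: int) -> np.ndarray:
    """Quantize weights to `bits` bits with symmetric uniform levels."""
    qmax = 2 ** (bits - 1) - 1             # e.g. 127 for 8-bit
    scale = np.max(np.abs(w)) / qmax       # map largest magnitude to qmax
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale                       # dequantized ("fake-quantized") weights

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
for b in (8, 4, 2):                        # lower precision -> larger error
    err = np.abs(w - quantize_weights(w, b)).mean()
    print(f"{b}-bit mean abs error: {err:.4f}")
```

In a co-design/co-training setting, a quantizer like this would sit inside the training loop (with a straight-through gradient estimator), so the network learns weights that tolerate the precision the hardware actually provides.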
Collaboration Partners: Prof. V. Belagiannis (FAU), Prof. M. Ortmanns (Universität Ulm)

This project is funded by the DFG (project number 493129587).

Chair of Multimedia Communications and Signal Processing
Cauerstr. 7
91058 Erlangen