zedif: Courses


We offer training on digital topics in research. You can either choose from our course list below or request special training. Besides the courses we offer, which are marked with our logo, this list also contains courses on similar topics by other providers; we keep it up to date as best we can.

Current & Upcoming

  • description:

    GitLab is a web application for managing Git repositories. Since it is built around Git, it is suitable for managing any project that mostly works with plain text files, for example software source code or TeX-based documents. With its built-in issue and wiki systems, it can, in certain cases, even be the right tool for managing a project without any files.

    This course will give you a foundational understanding of GitLab’s features, so that you can make informed decisions on how to use it as a tool.

    During the whole time, learners will follow along the instructors’ demonstrations, putting what they learn immediately into practice.

    It is not necessary to have previous experience with Git. To get the most out of the section on task automation, a very basic understanding of Docker is helpful, but not required.
    core areas:
    • Navigate GitLab
    • Create, use, and delete GitLab projects
    • Collaborate on GitLab projects
    • Automate tasks in GitLab
    • Manage projects in GitLab
    • Document projects in GitLab wikis

    instructors:
    • Philipp SchĂ€fer
    • AndrĂ© Sternbeck
  • description:

    Within this workshop we will spend only very little time on what LaTeX can do, and will instead concentrate on you actually making your first steps. This workshop alone will likely not be enough for a beginner to use LaTeX in the future without further help or reference, but it should give you a good start, and it includes pointers on where to turn for further examples.


    core areas:
    • document structure
    • basic formatting
    • symbols and math
    • images and figures
    • citations

    instructors:
    • Frank Löffler
    • Philipp SchĂ€fer
  • description:
     
    This introduction to R includes:
    • General introduction into the environment.
    • Basics of R syntax and objects.
    • Data handling in R.
    • Basic programming in R.
    • Graphics in R.
     
    This workshop addresses researchers interested in R with little or no previous experience in R. This workshop includes hands-on exercises and a homework assignment.
     
    Requirements:
    For this workshop please install the current versions of R (https://cran.r-project.org/) and RStudio (https://rstudio.com/products/rstudio/download/#download).

    Workshop Dates:
    January 9, 10, 16, and 17, 2023; 1:00 p.m. – 5:00 p.m. (four afternoons)
    instructors:
    • Jan Plötner
  • description:

    Code is everywhere - and scientific research is no exception. Programming allows researchers to handle large amounts of digital data with ease, to automate tasks that would otherwise be time-consuming or even impossible, and to explore new approaches. Programming skills make you more independent of pre-existing tools and let you tailor your workflow to your own needs.

    Python is one of the world's most popular programming languages, not least for scientific programming. Part of its popularity comes from the fact that it is rather easy to learn. But most importantly, you can use Python for a broad range of tasks, e.g. text analysis, sequence analysis, mathematical computations, machine learning, visualization, and many more.

    This workshop gives you a practical introduction to the basics of Python. It requires no prior experience with programming. Our goal is to show you some of the potential of Python, help you get started with programming and prepare you to take your next steps (on your own or in another course).


    core areas:
    • basic data types
    • variables
    • basic flow control
    • functions
    • basic file reading and writing
    • command line arguments
    • basic debugging
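As a small taste of the concepts listed above, a beginner-level sketch might look like the following (the function name and sample words are invented for illustration; this is not taken from the course materials). It combines basic data types, a function, and flow control:

```python
# Count how often each word length occurs in a list of words --
# a toy example touching data types, a function, and flow control.

def word_length_counts(words):
    """Return a dict mapping word length to number of occurrences."""
    counts = {}
    for word in words:          # basic flow control: a for loop
        n = len(word)
        if n in counts:         # ... and an if/else branch
            counts[n] += 1
        else:
            counts[n] = 1
    return counts

samples = ["data", "code", "research", "text"]  # a list of strings
print(word_length_counts(samples))              # {4: 3, 8: 1}
```

Even a few lines like these already exercise most of the core areas above.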

    instructors:
    • Frank Löffler
    • Philipp SchĂ€fer
  • description:
    We will give an overview of the different ways to parallelize a given task and will familiarize you with the Linux command line. In the hands-on part you will submit your first computations (jobs) to the cluster and hopefully enjoy their results.

    Requirements:

    • FSU account (needs to be specified at the registration page)
    • no fear of Linux and the command line

    instructors:
    • Frank Löffler
    • AndrĂ© Sternbeck
  • description:

    The two-day course will be held January 24 and January 31, 2023, from 8 a.m. to 12 noon.

    This is a hands-on introduction to programming. You will learn the most important concepts of programming through practical exercises using the language R. R is a well-documented, popular, and easily accessible programming language which is especially well suited to the analysis and manipulation of research data. Built around the scientific task of data analysis, you will learn how to read and access data, calculate simple statistics, index and plot the data, create functions for recurring tasks, as well as how to use if-else statements and loops. We will also cover best practices for writing code in R and how to export the results. No prior knowledge is necessary.

    We will use the integrated development environment (IDE) RStudio throughout the workshop. Please install the language and the IDE before attending the course.

    This workshop is based on the Software Carpentry lesson Programming with R.
    core areas:
    • using RStudio
    • variables
    • data types
    • indexing data
    • analysing data
    • plotting
    • choices
    • loops
    • reading and writing data
    • code documentation
    • packages
    • using R scripts in workflows

    instructors:
    • Christian KnĂŒpfer
    • Volker Schwartze
  • description:
    This training course aims to enable you to implement all of the legal and technical data protection requirements relevant to everyday university practice. Using concrete examples and your questions, the principles of data protection law and the basics of information security are explained in an accessible way.
    instructors:
    • Maximilian Koop
  • description:
    This event gives you an insight into the most important workflows of the Jena University Archive. You will learn about the archive's holdings and how to use them, as well as about retention periods and the archiving of records. The staff of the University Archive look forward to an engaged exchange with you.
  • description:
     
    After months of collecting, analyzing, and interpreting data, you would now like to publish your results in a scientific journal? Then it is time to take another close look at your data and think about how it can be prepared. Or are you just starting your doctoral or postdoc project and want to make sure you have not overlooked anything in conducting and documenting your research?

    According to the DFG guidelines for safeguarding good research practice, your results should be traceable and reproducible. Have you ever heard of FAIR data? For your data, this means that it should be Findable, Accessible, Interoperable, and Reusable. Are you aware that publishing your data in a dedicated data journal or repository can not only help you meet these requirements, but can also earn you an additional publication and further citations?

    Publication and long-term archiving of your data are just two aspects of research data management. This workshop is designed to help you determine your data management needs, no matter what stage of your project you are at. It also provides practical guidance on how to organize, structure, describe, and publish your data in order to meet the requirements of good scientific practice.

    Topics of the course:
    • Definition of research data management and the research data lifecycle
    • Data management plans
    • Documentation, data organization, metadata
    • Storage and backup
    • Archiving
    • Publication and reuse of research data
    • Legal aspects

    This is an online course using Moodle and live video conferences. We will provide self-study materials before the two sessions; participants are expected to work through the material in advance and complete the assigned tasks. The live sessions will include exercises, group work, discussions, and presentations.

    Workshop dates: February 15 and 17, 2023
    instructors:
    • Roman Gerlach
    • Jeanin JĂŒgler

Recently finished

  • description:

    If you have ever written a paper, worked with research data or programmed your own scripts, some of the following problems may sound familiar to you: You have accidentally overwritten something and would like to get it back from an earlier version of your file(s). You find yourself looking through a bunch of older versions wondering what exactly has changed between your current version and the older ones.

    Git helps you avoid these sources of frustration. As a version control system, Git lets you easily save changes in your files to a history and thus helps document your work. Using that history, you can see what you changed and when you changed it. You can always go back and revert your project to an earlier stage, should you have accidentally deleted text or broken some functionality in your code. Git even lets you work together with others on the same project, or even on the same file, at the same time.

    In this workshop, we introduce you to the fundamental features of Git. You will learn how to use Git in your daily work to keep track of changes in your documents or code. Git was originally designed for software development, but it quickly found users beyond the software community. So if you consider yourself a non-technical person, this workshop is still for you. The Git basics are easy to learn and easy to apply.


    Requirements: For this workshop you need a working installation of Git (version 2.23 or above). Downloads and installation instructions for various operating systems can be found here: https://git-scm.com/downloads.
    Certificate: This course is part of the Certificate Course "Tools for Digital Research". In order to receive the Library Carpentry Certificate you also have to attend the other two courses.
    core areas:
    • introduction to version control
    • install and config Git (git config)
    • create a repository (git init)
    • basic Git workflow: change - stage - commit (git add, git commit)
    • inspect status (git status)
    • explore the version history (git log)
    • compare versions (git diff)
    • revert changes (git restore, git reset)
    • use graphical user interfaces (git gui, GitLab)

    instructors:
    • Christian KnĂŒpfer
    • Philipp SchĂ€fer
  • description:

    The command line is an interactive interface to your operating system. Instead of controlling your computer by clicking and dragging with the mouse, you type in commands on the so-called command line or shell. Controlling your computer by hammering at the keyboard may look old-fashioned and uncomfortable at first glance. But if you are working with a lot of data or are programming, the command line is a very efficient tool. After some training, you will not want to miss it anymore. Command line interfaces are available on essentially all operating systems, including Linux, macOS, and Microsoft Windows.

    In this workshop we will concentrate on common commands within the Unix/Linux command line, which is also available on Windows.


    core areas:
    Use of the command line to
    • manage files and folders
    • start and control programs
    • search for files and within files
    • manipulate the content of files
    • create small scripts for repeating tasks

    instructors:
    • Christian KnĂŒpfer
    • Philipp SchĂ€fer
  • description:

    Are you working with data organised in spreadsheets? Do you usually spend more time on data cleansing and data quality improvements than on data analysis? And do you want a powerful tool that is free of charge and runs on every computer, including your local PC? If your answer to these questions is YES, then you should consider registering for this hands-on workshop.

    In this hands-on workshop we will first introduce what OpenRefine is and what it can do. You will learn how to import your data into OpenRefine, how to find and correct errors in your data, how to transform data, and how to save and export your cleaned data from OpenRefine. Finally, we will point you to additional resources.
    Participating in this workshop does not require any prior knowledge of OpenRefine.
    Installation instructions will be sent to you 1 week before the course starts.

    This workshop is based on the Library Carpentry lesson OpenRefine.
    core areas:
    • Overview of OpenRefine application
    • Data import
    • Data error correction
    • Data transformation
    • Data storage and export

       

    instructors:
    • Cora Assmann
    • Christian KnĂŒpfer
  • description:
    Methods of descriptive and inferential statistics are the fundamental tools for analyzing quantitative data. In this workshop we will get to know basic statistical methods and apply them in practice using the analysis software SPSS. This includes the tabular and graphical presentation of data, the calculation of important summary statistics, and basic inferential procedures such as significance tests. The methods are first introduced theoretically and then carried out on example data.

    The course is aimed at doctoral candidates and postdocs who have rarely or never worked with statistical methods, or who want to refresh the basics they learned during their studies.
    instructors:
    • Christof Nachtigall
  • description:

    Depending on the information actually stored, the basic requirements for its storage, and the retrieval options, various storage services are available at the computing center.

    This event is aimed at everyone who wants to store data centrally on the university network - in particular researchers, teachers, IT officers (IVV), staff, and administrative assistants.
    core areas:
    Storage - storing data
    • Application areas and special features of the storage service
    • Conditions of use
    • Outlook: user and group administration
    Backup - securing data
    • Possible uses of a backup
    • Application areas and special features
    • Distinction from data archiving
    Archive - preserving data
    • Important framework conditions
    • Modalities and special features of data preservation
    • Long-term storage

    instructors:
    • Rechenzentrum der UniversitĂ€t
  • description:

    Spreadsheets: they are loved, hated, and for many people indispensable. In science, they are a widely used way to organize data. However, there are many pitfalls, and the uncritical handling of spreadsheets can lead to severe misunderstandings or problems, as the loss of data on more than 10,000 COVID-19 cases in the UK shows. Even without such severe consequences, spreadsheets can be a source of annoyance if files created by others, or just in a different software, are not understandable or usable without additional effort.
    In addition, good data documentation will be discussed and Colectica will be introduced as a tool. The workshop consists of a theoretical and an interactive part. The exercises are demonstrated in Excel, but can also be applied to other systems.
    core areas:
    • Good practice in creating spreadsheets
    • Data documentation (metadata)
    • Colectica presentation

    instructors:
    • Cora Assmann
    • Volker Schwartze

Old

  • description:

    3D models are digital representations of (real) objects. Although the first industries that come to mind are probably the film and gaming industries, 3D models are used in a variety of other areas of work and life.

    For example, they can be used to visualize plans of buildings or to design new products. Many products that we use in everyday life are created on the basis of such models. But 3D models are also used in the field of medicine in diagnostics or for the production of individual prostheses. However, due to the continuous development of 3D printing technologies, digital 3D models are also becoming more and more relevant in the private sector.

    Especially in science, 3D models can play an important role, e.g. in the digitization of historical objects and buildings or archaeological finds (keyword Digital Humanities), as well as in the investigation of geological or physical processes or in the visualization of objects that are otherwise difficult to capture, such as chemical structures or astronomical objects. The fields of application of 3D models are very diverse and cover (almost) all disciplines.

    The workshop is addressed to all students, teachers, researchers and all other interested persons. No special prior knowledge is required.

    This workshop is organized by the Data Literacy Project of the University of Jena in cooperation with Lichtwerkstatt Jena and Prof. Sander MĂŒnster. If you have any questions about the event, please feel free to contact us at: dataliteracy@uni-jena.de.


    core areas:
    • Goals and application areas of 3D models
    • Basics of approaches and techniques
    • Practical introduction to the 3D modeling software Blender
    • Practical introduction to 3D scanning by photogrammetry
    • 3D printing

    instructors:
    • Volker Schwartze
    • Sander MĂŒnster
    • Johannes Kretzschmar
  • description:

    This workshop is designed to help you carry out your diverse tasks as effectively as possible. For a comprehensive overview, you will be introduced not only to the familiar services but also to what is new on specific topics. For example, in addition to classic scientific computing via the command line, the URZ will in future also offer web-based interfaces for interactive use of the HPC resources. Further questions are welcome in the discussion round at the end.

    The course is aimed in particular at employees of Friedrich Schiller University.


    core areas:
    • The URZ at a glance - important services and offerings
    • Scientific computing and data storage
    • eLearning - mastering challenges efficiently
    • Questions and discussion

    instructors:
    • Rechenzentrum der UniversitĂ€t
  • description:

    The use of digital tools is an important basis for dealing with growing and increasingly complex data sets. This challenge is not limited to science but affects almost all areas of our society. Knowing how to use programming languages often enables fast and flexible approaches to solving problems when working with data.

    The summer school is aimed at all students who work predominantly with numerical data and want to learn the basics of programming with Python. It is therefore particularly suitable for students from the fields of natural, life, economic, behavioral and social sciences and medicine, but is also open to all other interested parties.

    The course first teaches basic concepts and fundamental principles of programming. After first attempts in Python, practical exercises are worked on independently. All according to the motto: "Learning by doing!"

    The summer school is organized by the Data Literacy Jena (DaLiJe) project in collaboration with the Bioinformatics Core Facility Jena.

    Registration: Friedolin.
     
    core areas:
    • Basics of programming and numerics with Python
    • Specialization in processing and visualization of numerical data

    instructors:
    • Emanuel Barth
  • description:

    In this workshop, we are going to answer these questions. We start with explaining how Docker containers work and where the lines are between the container and the host operating system. Then you are going to learn — in practical exercises — how to use Docker's command-line interface to get containers, run and manage them, and to create your own container images.

    After this workshop you will be able to take advantage of Docker in your own scientific work. You will be able to run applications in a Docker container on a workstation and on a cluster, and also make your scientific workflows reproducible by creating and sharing your own Docker images.


    Prerequisites:

    In order to take part in this workshop, you should have basic knowledge of the Linux command line and should be able to navigate the file system.


    core areas:
    • Docker terminology: container image, container, Dockerfile
    • Downloading container images
    • Running containers
    • Managing containers and container images
    • Creating container images
    • Running Docker containers on an HPC cluster with Singularity

    instructors:
    • Eckhard Kadasch
    • AndrĂ© Sternbeck
  • description:

    Its flexible programming interface makes easy plots easy but also allows you to create very complex figures. Matplotlib is therefore an excellent tool for the everyday work of scientists, one that alone makes getting into Python worthwhile.

    In this workshop, you will learn how to use Matplotlib for your scientific visualizations. We will look at the various types of plots Matplotlib can generate, how to style and annotate them, and how to export them in various formats. While doing so, we will explain the fundamental anatomy of a Matplotlib figure and give some advice on how to design plots well.

    At the end of this workshop you will not only be able to visualize your data, you will also have a tool at hand that lets you do this in a scriptable and, thus, repeatable fashion. If the data changes, you can just rerun your script; there is no need to open a plotting application, click around, and manually adjust and save plots.


    Prerequisites:

    To take part in this workshop, you should be familiar with the basics of Python. Some experience with NumPy arrays is beneficial but not required.


    core areas:
    • available types of plots
    • anatomy of Matplotlib figures
    • object-oriented and MATLAB-style programming interface
    • plot styling and annotation
    • exporting plots

    instructors:
    • Eckhard Kadasch
    • Frank Löffler
  • description:

    One of those tools is the NumPy package. NumPy provides Python with an efficient array datatype and accompanying compute functions, which together form the foundation of many of today's scientific libraries.

    In this workshop, you are going to learn how to use NumPy to solve your own computing tasks. We start by discussing what makes Python slow compared to other languages and how NumPy arrays remedy the situation. We are going to look at NumPy's memory model, introduce you to the most useful functions of the package, and show how you can use NumPy for tasks ranging from element-wise array operations through linear algebra to the implementation of numerical methods.


    Prerequisites:

    To take part in this workshop, you should be familiar with the basics of Python.


    core areas:
    • performance limitations of Python
    • memory model of NumPy arrays
    • how to create and work with NumPy arrays
      • important NumPy functions
      • avoiding Python loops with array operations
    • application in linear algebra and numerical methods
    • performance considerations: temporary arrays, copies, and views
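The loop-avoidance idea from the list above can be illustrated with a short sketch (array contents invented for illustration): one vectorized expression replaces an explicit Python loop and, for large arrays, runs far faster.

```python
# Replacing an explicit Python loop with a NumPy array operation.
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 20.0, 30.0, 40.0])

# Element-wise: one vectorized expression instead of a Python loop.
c = a * b + 1.0                  # array([ 11.,  41.,  91., 161.])

# The same result with an explicit loop, element by element:
c_loop = np.empty_like(a)
for i in range(len(a)):
    c_loop[i] = a[i] * b[i] + 1.0

assert np.array_equal(c, c_loop)
```

The vectorized form delegates the loop to compiled code inside NumPy, which is where the speedups discussed in the workshop come from.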

    instructors:
    • Eckhard Kadasch
    • Frank Löffler
  • description:

    In this course we look at how to write professional code in the Julia programming language. We start by covering the idiosyncrasies of Julia, continue with properly structuring a Julia project, learn how to write efficient code in Julia, and mention a few important packages as well as how to call into software written in other programming languages.

    Learners will continuously follow the instructors, programming in their own Jupyter notebooks.

    We assume that learners have experience with programming in general, but experience with Julia is not required.
    core areas:
    • Name Julia’s idiosyncrasies
    • Navigate Julia’s documentation
    • Choose the right data structures for efficient code
    • Call from Julia into code written in other languages
    • Find existing Julia libraries

    instructors:
    • Philipp SchĂ€fer
  • description:

    GitLab is a web application for managing Git repositories. Since it is built around Git, it is suitable for managing any project that mostly works with plain text files, for example software source code or TeX-based documents. With its built-in issue and wiki systems, it can, in certain cases, even be the right tool for managing a project without any files.

    This course will give you a foundational understanding of GitLab’s features, so that you can make informed decisions on how to use it as a tool.

    During the whole time, learners will follow along the instructors’ demonstrations, putting what they learn immediately into practice.

    We assume basic understanding of Git and the Unix shell. Having taken recent courses on either topic is sufficient. To get the most out of the section on task automation, a very basic understanding of Docker is helpful, but not required.
    core areas:
    • Navigate GitLab
    • Create, use, and delete GitLab projects
    • Collaborate on GitLab projects
    • Automate tasks in GitLab
    • Manage projects in GitLab
    • Document projects in GitLab wikis

    instructors:
    • Frank Löffler
    • Philipp SchĂ€fer
  • description:

    Python is one of the world's most popular programming languages, not least for scientific programming. Part of its popularity comes from the fact that it is rather easy to learn. But most importantly, you can use Python for a broad range of tasks, e.g. text analysis, sequence analysis, mathematical computations, machine learning, visualization, and many more.

    This workshop gives you a practical introduction to the basics of Python. It requires no prior experience with programming. Our goal is to show you some of the potential of Python, help you get started with programming and prepare you to take your next steps (on your own or in another course).


    core areas:
    • basic data types
    • variables
    • basic flow control
    • functions
    • basic file reading and writing
    • command line arguments
    • basic debugging

    instructors:
    • Eckhard Kadasch
    • Frank Löffler
  • description:

    This introduction into R includes:
    • General introduction into the environment.
    • Basics of R syntax and objects.
    • Data handling in R.
    • Basic programming in R.
    • Graphics in R.

    This workshop addresses researchers interested in R with little or no previous experience in R. This workshop includes hands-on exercises and a homework assignment.

    Requirements:
    For this workshop please install the current versions of R (https://cran.r-project.org/) and RStudio (https://rstudio.com/products/rstudio/download/#download) before the workshop.

    Recommendations:
    A major part of this workshop will be spent working in R. In order to avoid switching between my shared screen and your computer, I would recommend using two monitors for this workshop.

    Workshop dates:
    The workshop will consist of four afternoon sessions:
    May 23 and 24 and June 02 and 03, 2022; 1.00 p.m. – 5.00 p.m.
    instructors:
    • Jan Plötner
  • description:

    If you are interested in learning Git from scratch, please register for the first part Basic Version Control with Git: A Beginner's Workshop (see our catalogue).

    If you work on documents or code together with multiple people, it can quickly get quite complex to keep track of everyone's changes. Maybe you e-mail different versions back and forth and start to lose track of the individual contributions. Or you use a shared folder on Nextcloud or Dropbox but run the risk of overwriting other people's changes when working on the same file simultaneously. This is where Git can help you.

    Git is not only a great tool for versioning your own projects, it also provides you with a robust framework for collaborating, that is, for keeping track of everyone's changes and for integrating them into one repository, be it code, documents, or even data. And Git scales from one, to two, to many people.

    In this workshop, you learn how to use Git's collaborative features. You will learn how to organize your work in branches, merge them together, as well as how to share your work with others using remote repositories and resolve any conflicts that may arise.


    Prerequisites: If you want to join this workshop, you should have a basic familiarity with Git on the command line. That is, you should know how to create repositories, how to stage and commit files, and how to look at the version history and the state of a Git repository.

     

    You should also have a working installation of Git (version 2.23 or above). Downloads and installation instructions for various operating systems can be found here: https://git-scm.com/downloads.


    core areas:
    • working with branches (git branch)
    • clone a repository (git clone)
    • working with a remote repository (git pull, git push)
    • resolve version conflicts (git merge)
    • inspect who changed what (git blame)

    instructors:
    • Eckhard Kadasch
    • Christian KnĂŒpfer
  • description:

    If you are interested in advanced topics regarding Git, please also register for the second part Collaborative Version Control with Git: An Advanced Workshop (see our catalogue).

    If you have ever written a paper, worked with research data or programmed your own scripts, some of the following problems may sound familiar to you: You have accidentally overwritten something and would like to get it back from an earlier version of your file(s). You find yourself looking through a bunch of older versions wondering what exactly has changed between your current version and the older ones.

    Git helps you avoid these sources of frustration. As a version control system, Git lets you easily save changes in your files to a history and thus helps document your work. Using that history, you can see what you changed and when you changed it. You can always go back and revert your project to an earlier stage, should you have accidentally deleted text or broken some functionality in your code. Git even lets you work together with others on the same project, or even on the same file, at the same time, but more on that in the second part of our Git workshop series.

    In this workshop, we introduce you to the fundamental features of Git. You will learn how to use Git in your daily work to keep track of changes in your documents or code. Git was originally designed for software development, but it quickly found users beyond the software community. So if you consider yourself a non-technical person, this workshop is still for you. The Git basics are easy to learn and easy to apply.


    Requirements: For this workshop you need a working installation of Git. Downloads and installation instructions for various operating systems can be found here: https://git-scm.com/downloads.

    core areas:
    • introduction to version control
    ‱ install and configure Git (git config)
    ‱ create a repository (git init)
    ‱ basic Git workflow: change - stage - commit (git add, git commit)
    ‱ inspect status (git status)
    ‱ explore the version history (git log)
    ‱ compare versions (git diff)
    ‱ revert changes (git restore, git reset)
    ‱ use graphical user interfaces (git gui, GitLab)
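    The basic workflow listed above can be sketched in a short session; the file name and commit message are invented for illustration.

```shell
# Set up a fresh repository for the demonstration.
rm -rf /tmp/git-basics
mkdir /tmp/git-basics && cd /tmp/git-basics

git init                                # create a repository
git config user.name "Example User"     # placeholder identity
git config user.email "you@example.org"

# change - stage - commit
echo "draft" > paper.txt
git status                              # paper.txt shows up as untracked
git add paper.txt                       # stage the change
git commit -m "Start paper"             # record it in the history

echo "revision" >> paper.txt
git diff                                # compare working copy with last commit
git restore paper.txt                   # discard the uncommitted change
git log --oneline                       # explore the version history
```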

    instructors:
    • Eckhard Kadasch
    • Christian KnĂŒpfer
  • description:

    The command line is an interactive interface to your operating system. Instead of controlling your computer by clicking and dragging with the mouse, you type in commands at the so-called command line or shell. Controlling your computer by typing at the keyboard may look old-fashioned and uncomfortable at first glance. But if you work with a lot of data or write programs, the command line is a very efficient tool. After a short training period, you will not want to be without it. Command line interfaces are available on essentially all operating systems, including Linux, macOS, and Microsoft Windows.

    In this workshop we will concentrate on common commands within the Unix/Linux command line, which is also available on Windows.


    core areas:
    Use of the command line to
    • manage files and folders
    • start and control programs
    • search for files and within files
    • manipulate the content of files
    • create small scripts for repeating tasks
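    The core areas above can be illustrated with a few standard Unix commands; all file and directory names here are invented for the demonstration.

```shell
rm -rf /tmp/cli-demo
mkdir /tmp/cli-demo && cd /tmp/cli-demo

# manage files and folders
mkdir data
printf 'alpha\nbeta\ngamma\n' > data/words.txt
cp data/words.txt backup.txt

# search for files and within files
find . -name '*.txt'            # locate files by name
grep -n 'beta' data/words.txt   # search inside a file, with line numbers

# manipulate the content of files
sort -r data/words.txt > reversed.txt

# a small script for a repeating task
printf '#!/bin/sh\nwc -l "$1"\n' > count-lines.sh
chmod +x count-lines.sh
./count-lines.sh data/words.txt  # reports the line count of the given file
```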

    instructors:
    • Frank Löffler
    • Philipp SchĂ€fer
  • description:
    Due to the increasing digitization and datafication in all fields of research, the proper management of research data becomes increasingly important. Did you spend months collecting samples and measurements in the field or in the lab? Have you explored, analysed and interpreted this data and finally published your findings in a scientific journal? Well, then it is time to think about your data again and what to do with it now. Or are you just starting your PhD or postdoc project and want to make sure not to overlook anything when it comes to obtaining and documenting your measurements?

    According to the guidelines for safeguarding good scientific practice, your results should be replicable and repeatable. Are you aware of the concept of FAIR data that is mentioned in the research data policies of many funders, institutions and journals? FAIR means that data are findable, accessible, interoperable and re-usable. To ensure this, your data should be well documented, securely stored and available for later reuse. Publishing your research data through a dedicated data journal or repository can help you with this and may also earn you an additional publication and further citations.

    Data publishing and long-term preservation are just two aspects of research data management. This workshop will help you determine your data management requirements, no matter at which stage of the project you are. In addition, the course provides practical guidance on how to organize, structure, describe and publish your data in order to comply with good scientific practice. Topics of the course:
    • Basic definitions in research data management and the data life cycle
    • Data management plans (DMP)
    • Documentation, data organisation, metadata
    • Storage and back-up
    • Archiving
    • Publication and re-use of research data
    • Legal aspects

    instructors:
    • Cora Assmann
    • Luiz Gadelha
    • Jitendra Gaikwad
  • description:

    Topics of the course:
    • Basic definitions in research data management
    • Data management plans (DMP) and DMP tools
    • Data collection
    • Data processing and analysis
    • Data publishing and sharing
    • Data preservation
    • Data reuse and search
    • Legal aspects (privacy issues) and Licenses
    • Introduction to local and national RDM support facilities


    The course consists of two sessions. The first is on Monday, 07.03.2022 from 09:00 to 12:30, the second on Wednesday, 09.03.2022 from 09:00 to 12:30.

    After registration, you will receive a questionnaire in which you can enter your expectations and questions about the course. One week before the course starts, you will get the access information for the online event.
    instructors:
    • Cora Assmann
    • Luiz Gadelha
  • description:

    A winter school focusing on the humanities, law, and the social sciences

    Programming and tying your shoes have one thing in common: to learn it, you have to do it (again and again).

    In this course you will learn to tie loops. The variable lies in the input and output of the lace. You therefore have to make a case distinction and then give the processor the appropriate commands. You fetch the shoes for this exercise program from memory and put them back there afterwards. Finally, you write down the steps you have learned as an algorithm and translate it into an everyday language that others can interpret as well.

    If you know exactly what the highlighted words mean with respect to programming, you probably do not need this course.

    The winter school is aimed at all students who work with texts and want to learn the basics of programming with Python. It is therefore primarily intended for students of the humanities, the social sciences, and law, but is open to anyone else who is interested.

    We begin with a general introduction to basic programming concepts and to how a computer works. Next, fundamental principles of programming, such as commands, variables, loops, and case distinctions, are introduced, and first steps are taken in Python. You will then learn to develop your own programs for working with texts. The course consists mostly of hands-on exercises in the Python programming language: learning by doing!

    The event takes place online via Zoom. Participants will receive the access details and further information by 16 February 2022 at the latest. A certificate of attendance will be issued.

    Registration is possible until 15 February 2022.

  • description:

    Bash is an interactive interface to your operating system. Instead of controlling your computer by clicking and dragging with your mouse, you type in commands at the so-called command line, terminal, or shell, of which Bash is the most widespread. Controlling your computer by typing at the keyboard may look old-fashioned and uncomfortable at first glance. But if you work with a lot of data or write your own computer programs, the command line is a very efficient tool. After a short training period, you will not want to be without it. With its roots in the Unix operating system, Bash is nowadays available for Linux, macOS, as well as Microsoft Windows.

    This is the advanced Bash course. We will learn how to use Bash for

    • searching for files and within files and
    • creating Bash scripts for repeating tasks.
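    A minimal sketch of these two topics; the log file and its contents are invented for this example.

```shell
rm -rf /tmp/bash-adv
mkdir /tmp/bash-adv && cd /tmp/bash-adv
printf 'error: disk full\ninfo: all good\nerror: timeout\n' > app.log

# searching within files and for files
grep -n '^error' app.log        # lines starting with "error", numbered
find . -type f -name '*.log'    # every .log file below the current directory

# a small Bash script that makes the search repeatable
cat > errors.sh <<'EOF'
#!/usr/bin/env bash
# Print only the error lines of the file given as the first argument.
grep '^error' "$1"
EOF
chmod +x errors.sh
./errors.sh app.log
```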

    instructors:
    • Christian KnĂŒpfer
    • Philipp SchĂ€fer
  • description:

    In this workshop we will spend very little time on what LaTeX can do and will instead concentrate on you actually taking your first steps. This workshop alone will likely not be enough for a beginner to use LaTeX without further help or reference, but it should give you a good start, including pointers on where to find examples and support.


    core areas:
    • document structure
    • basic formatting
    • symbols and math
    • images and figures
    • citations
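    A minimal LaTeX document touching these core areas might look as follows. This is an illustrative sketch, not course material; the image file and citation key are placeholders.

```latex
\documentclass{article}
\usepackage{graphicx}   % for \includegraphics

\begin{document}

\section{Introduction}  % document structure

Some \emph{emphasised} and \textbf{bold} text.  % basic formatting

Symbols and math: $\alpha + \beta = \int_0^1 f(x)\,dx$.

% A figure would be included like this (no image file is provided here):
% \begin{figure}
%   \includegraphics[width=\linewidth]{example-plot}
%   \caption{An example figure.}
% \end{figure}

A citation would look like \verb|\cite{knuth1984}| once a bibliography is set up.

\end{document}
```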

    instructors:
    • Frank Löffler
    • Christian KnĂŒpfer
  • description:
    hopefully enjoy their results.

    Requirements:

    • FSU account (needs to be specified at the registration page)
    ‱ no fear of Linux and the command line

    instructors:
    • AndrĂ© Sternbeck
  • description:

    Gephi is a popular and easy-to-use open-source tool for working with networks. In this workshop we will use Gephi to create example networks from data, visualise these networks and perform various analyses on them.


    instructors:
    • Christian KnĂŒpfer
  • description:
    Research data management (RDM) covers all activities in handling research data, from creation, documentation, and storage to publication and archiving. To account for the many aspects of RDM, a data management plan (DMP) should be drawn up before the project starts; it documents how the data generated in the research project will be handled and specifies the resources required. Appropriate research data management and the creation of a DMP are required by more and more funding organisations when applying for projects and are therefore an important part of project planning. Good planning also helps to account for arising costs in the funding application from the outset, to secure support from suitable partners, and to set up the infrastructure required for effective and secure handling of the research data during the project.

    The event gives an overview of the requirements of the various funding organisations regarding RDM and the creation of DMPs. It also presents the structure and key content of a DMP as well as useful support options in the form of consulting services and tools.

    The course will be held in English.
    core areas:
    ‱ requirements of various funding organisations
    ‱ overview of the structure and content of a DMP
    ‱ useful tools and support services

    instructors:
    ‱ Roman Gerlach | Kontaktstelle Forschungsdatenmanagement
    ‱ Dr. Cora Assmann | ThĂŒringer Kompetenznetzwerk Forschungsdatenmanagement
  • description:
    Bash is an interactive interface to your operating system. Instead of controlling your computer by clicking and dragging with the mouse, you type in commands at the so-called command line or shell, of which Bash is the most widespread. Controlling your computer by typing at the keyboard may look old-fashioned and uncomfortable at first glance. But if you work with a lot of data or write your own computer programs, the command line is a very efficient tool. After a short training period, you will not want to be without it. With its roots in the Unix operating system, Bash is nowadays available for Linux, macOS, as well as Microsoft Windows. In this workshop we will learn how to use Bash for:
    • managing files and folders,
    • starting and controlling programs,
    • searching for files and within files,
    • manipulating the content of files, and
    • creating Bash scripts for repeating tasks.
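    As a small taste of combining such commands, here is a sketch using pipes and redirection; the word list is made up for the demonstration.

```shell
rm -rf /tmp/bash-intro
mkdir /tmp/bash-intro && cd /tmp/bash-intro

printf 'cherry\napple\nbanana\n' > fruit.txt

# start and combine programs: sort the file, then keep only the first line
sort fruit.txt | head -n 1       # prints "apple"

# manipulate the content of a file: replace a word with sed
sed 's/banana/kiwi/' fruit.txt > fruit2.txt
```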

    instructors:
    • Christian KnĂŒpfer
    • Philipp SchĂ€fer
  • description:

    Code is everywhere, and scientific research is no exception, whether in the STEM disciplines or, more recently, in the growing fields of digital humanities and computational social science. Programming allows researchers to handle large amounts of digital data with ease, to automate tasks that would otherwise be time-consuming or even impossible, and to explore new approaches. Programming skills make you more independent of pre-existing tools and let you tailor your workflow to your own needs.

    Python is one of the world's most popular programming languages, not least for scientific programming. Part of its popularity comes from the fact that it is rather easy to learn. Most importantly, though, you can use Python for a broad range of tasks, e.g. text analysis, sequence analysis, mathematical computations, machine learning, visualization, and many more.

    This two-session workshop gives you a practical introduction to the basics of Python. It requires no prior experience with programming. Our goal is to show you some potentials of Python, help you get started with programming and prepare you to take your next steps (on your own or in another course).
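    As a small taste of what such first steps look like, here is a short, self-contained sketch of variables, a loop, and a condition; the word list is invented for illustration.

```python
# A first taste of Python: variables, a loop, and a condition.
words = ["research", "data", "code"]

lengths = {}
for word in words:          # loop over the list
    lengths[word] = len(word)

long_words = [w for w in words if len(w) > 4]  # keep only the longer words

print(lengths)     # {'research': 8, 'data': 4, 'code': 4}
print(long_words)  # ['research']
```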

    The course consists of two sessions. The first is on Tuesday, 04.01.2022 from 10:00 to 12:00, the second on Tuesday, 11.01.2022 from 10:00 to 12:00.
    If possible, please also register for the course via Indico: https://indico.rz.uni-jena.de/event/12/.

    instructors:
    • Eckhard Kadasch
    • Yannic Bracke
  • description:

    If you have ever written a paper or worked with research data, some of the following problems may sound familiar to you: You have accidentally overwritten something and would like to get it back from an earlier version of your file(s). You find yourself looking through a bunch of older versions wondering what exactly has changed between your current version and the older version. You and a colleague work on the same files and have to e-mail different versions back and forth. You and a colleague use a shared folder (e.g. Nextcloud, Dropbox) but made edits at the same time, so some of your edits are lost.

    These unpleasant situations can be avoided by using Git. As a version control system, Git helps you keep track of your work and collaborate with other people. It enables easy documentation, saving and retrieving earlier versions, and working with others in the same directory or even on the same file at the same time.

    Git has mostly been used for software development so far, but this should not put off researchers from non-technical disciplines. The Git basics are easy to learn and easy to apply. In this two-session course, you will be introduced to the fundamental features of Git and learn how to use it in your daily work.

  • description:

    Data security may only be a part of IT security, but even when concentrating on the security of data, the list of available tools is at least as long as the list of possible dangers.

    In this introductory workshop we will discuss topics ranging from social engineering to good (and not so good) password practices, briefly touch on the concept of public/private key encryption, and then show you how to use some of the more common tools based on these methods, including password managers, gpg, VPNs, email signing and encryption, and, last but not least, your browser.

    Due to the large zoo of tools, it is likely that we do not cover your tools of choice. However, we will focus on those aspects that are similar across different products.

  • description:

    In 1804 the should-be famous engineer Richard Trevithick invented something to connect us all – the very first steam locomotive, a train. While being a huge success at the time, little did he know that he was laying the foundation of a much bigger phenomenon: I am, of course, talking about the hype train. But while back then the train was used to connect people, this train is – and with rising COVID numbers, this aspect is probably more important than ever – all about isolation.

    In this workshop, we are going to put both of these aspects together: we will discuss the hype around Docker that appeared over the past years and why its isolation features are so important to its success.

    With practical exercises and some sprinkles of theoretical background you will develop an understanding of how Docker (and container engines in general) works, what it is used for and how you can take advantage of it. You will learn how to use the Docker tools to run and manage containers, or release your own software to the public. Then, if time permits, we will deep-dive into higher-level tools to orchestrate collections of containers across the boundaries of a single physical machine.

    So, if that sounds interesting to you, hop on the hype train, or it will leave the station without you! All aboard!

  • description:

    Are you working with data organised in spreadsheets? Do you usually spend more time on data cleansing and data quality improvements than on data analysis? And do you want a powerful tool that is free of charge and runs on every computer, including your local PC? If your answer to these questions is YES, then you should consider registering for this hands-on workshop.

    OpenRefine is a powerful, free and open source tool to clean, correct, codify, and extend your tabular data. Using OpenRefine will save you hours of manual editing and correcting of data.

    In this hands-on workshop we will first introduce what OpenRefine is and what it can do. You will learn how to import your data into OpenRefine, how to find and correct errors in your data, how to transform data, and how to save and export your cleaned data from OpenRefine. Finally, we will point you to additional resources to continue learning after the workshop.

    Participating in this workshop does not require any prior installation nor knowledge of OpenRefine.

  • description:
    Due to the increasing digitization and datafication in all fields of research, the proper management of research data becomes increasingly important. Did you spend months collecting samples and measurements in the field or in the lab? Have you explored, analysed and interpreted this data and finally published your findings in a scientific journal? Well, then it is time to think about your data again and what to do with it now. Or are you just starting your PhD or postdoc project and want to make sure not to overlook anything when it comes to obtaining and documenting your measurements?

    According to the guidelines for safeguarding good scientific practice, your results should be replicable and repeatable. Are you aware of the concept of FAIR data that is mentioned in the research data policies of many funders, institutions and journals? FAIR means that data are findable, accessible, interoperable and re-usable. To ensure this, your data should be well documented, securely stored and available for later reuse. Publishing your research data through a dedicated data journal or repository can help you with this and may also earn you an additional publication and further citations.

    Data publishing and long-term preservation are just two aspects of research data management. This workshop will help you determine your data management requirements, no matter at which stage of the project you are. In addition, the course provides practical guidance on how to organize, structure, describe and publish your data in order to comply with good scientific practice. Topics of the course:
    • Basic definitions in research data management and the data life cycle
    • Data management plans (DMP)
    • Documentation, data organisation, metadata
    • Storage and back-up
    • Archiving
    • Publication and re-use of research data
    • Legal aspects

    Target group: Doctoral Candidates and Postdocs from the Environmental and Earth Sciences (e.g. ecology, biology, geology, geography). This will be an online course using Moodle and live video conferences. We will provide self-study material prior to the two sessions and we expect participants to study the material beforehand and to fulfil the tasks given. During the live sessions there will be exercises, group work, discussions and some presentations.

    Course dates: 25 and 27 October, 9-13 h
    instructors:
    • Cora Assmann
    • Annett Schröter
    • Volker Schwartze
  • description:

    comic by xkcd.

    Spreadsheets: they are loved, hated, and for many people indispensable. In science, they are a widely used way to organize data. However, there are many pitfalls, and the uncritical handling of spreadsheets can lead to severe misunderstandings or problems, as the loss of data on more than 10,000 COVID-19 cases in the UK shows. Even without such severe consequences, spreadsheets can be a source of annoyance if files created by others, or just in a different software, are not understandable or usable without additional effort. In this workshop, we will introduce possible pitfalls as well as some good-practice guidelines for creating spreadsheets.

  • description:
    Research data management (RDM) covers all activities in handling research data, from creation, documentation, and storage to publication and archiving. To account for the many aspects of RDM, a data management plan (DMP) should be drawn up before the project starts; it documents how the data generated in the research project will be handled and specifies the resources required. Appropriate research data management and the creation of a DMP are required by more and more funding organisations when applying for projects and are therefore an important part of project planning. Good planning also helps to account for arising costs in the funding application from the outset, to secure support from suitable partners, and to set up the infrastructure required for effective and secure handling of the research data during the project.

    The event gives an overview of the requirements of the various funding organisations regarding RDM and the creation of DMPs. It also presents the structure and key content of a DMP as well as useful support options in the form of consulting services and tools.

    core areas:
    ‱ requirements of various funding organisations
    ‱ overview of the structure and content of a DMP
    ‱ useful tools and support services

    course lead:
    Benjamin Sippel

    instructor:
    Roman Gerlach | Kontaktstelle Forschungsdatenmanagement