zedif: Courses


We offer training on digital topics in research. You can either choose from our course list below or request special training. See this overview of our course topics for inspiration.

Besides our own courses, which are marked with our logo, this list also contains courses on similar topics by other providers; we keep it up to date as best we can. Offers that are more consulting than workshops can be found here.

Current & Upcoming

  • description:
    In this course, participants learn the functions of MATLAB for the automated analysis and visualization of data. The basics of the general structure of the software, properties of data types, executing commands, programming statements and loops, creating functions, statistical evaluation of data, generating informative graphics, and saving results are taught and put into practice using a sample data set. Course dates: April 19, April 26, and May 3, 2024; 9:00 a.m. to 5:00 p.m.
    instructors:
    • Andy Schumann
  • description:

    While HPC clusters are composed of components similar to those found in PCs or workstations, they are used in a very different way. This is mainly because they consist of many computers networked together and because they are shared by multiple users.

    We start this workshop by explaining the inner structure of a typical HPC cluster and highlighting the differences from a workstation. You will then learn how to use the Slurm workload manager, which the university cluster “Draco” uses to distribute compute jobs across the hardware. We will also explore the various types of batch jobs and interactive tasks. During the hands-on sessions, you will submit your first compute jobs to the cluster and hopefully enjoy their results. Finally, we will provide guidance on how to install and use your own software.

    This workshop is held in person; online participation is not possible. The course is taught in English.


    Requirements:

    To participate, you need

    • a user account of the University of Jena (please enter it during registration so we can activate your account for Draco), and
    • basic familiarity with Linux and using the command line or the curiosity to explore it.

    To familiarize yourself with the Linux command line, you may also join our workshop “Introduction to the Command Line”.


    core areas:
    • Overview of local HPC resources
    • Structure of HPC systems
    • Usage of an HPC system for numerically intensive applications
    • Interactive use

    instructors:
    • Eckhard Kadasch
    • André Sternbeck
  • description:

    We will start with a short introduction to the basics of parallel computing and then learn to analyze scaling behavior and performance bottlenecks based on existing applications. The aim will be to optimize job scripts for your own parallel applications.

    For the practical parts, we will use the university cluster “Draco”. If you do not yet have access to the cluster, please apply for it via our Service-Desk. Please provide your alphanumeric university login.

    This workshop is held in person; online participation is not possible. The course is taught in English.


    Prerequisites:
    • Basic knowledge of Linux, the command line (e.g. Bash) and SSH
    • Experience in at least one high-level programming language such as Python or Fortran
    • Initial experience with a workload manager such as Slurm
    • User account of the University of Jena, which must be specified on the registration page

    core areas:
    • Parallel computer architectures and programming models
    • Parallel scaling behavior and performance bottlenecks
    • Optimization of Slurm job scripts

    instructors:
    • Eckhard Kadasch
    • André Sternbeck
  • description:

    One of those tools is the NumPy package. NumPy provides Python with an efficient array datatype and accompanying compute functions, which together form the foundation of many of today's scientific libraries.

    In this workshop, you are going to learn how to use NumPy to solve your own computing tasks. We start by discussing what makes Python slow compared to other languages and how NumPy arrays remedy the situation. We are going to look at NumPy's memory model, introduce you to the most useful functions of the package, and show how you can use NumPy for tasks ranging from element-wise array operations through linear algebra to the implementation of numerical methods.

    To foster an interactive atmosphere among participants and instructors, this workshop is offered in person and not as a hybrid course.

    The course language is English.


    Prerequisites:

    To take part in this workshop, you should be familiar with the basics of Python.

    We encourage you to bring your own laptop. All you need is a working Python installation with JupyterLab or Jupyter Notebook installed.


    core areas:
    • performance limitations of Python
    • memory model of NumPy arrays
    • how to create and work with NumPy arrays
      • important NumPy functions
      • avoiding Python loops with array operations
    • application in linear algebra and numerical methods
    • performance considerations: temporary arrays, copies, and views
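    To give a flavour of these core areas, the sketch below shows whole-array operations replacing a Python loop, and the difference between a view and a copy; the numbers are made up for illustration and are not course material:

```python
import numpy as np

# Element-wise operations on whole arrays replace explicit Python loops
# and run in compiled code instead of the interpreter.
x = np.arange(1_000_000, dtype=np.float64)

# Vectorized: one expression instead of a loop over a million elements.
y = x * 2.0 + 1.0

# Slicing returns a view that shares memory with the original array;
# calling .copy() allocates an independent buffer.
view = y[::2]          # no data copied, y is its base
copy = y[::2].copy()   # independent data

print(y[:3])  # [1. 3. 5.]
```

A view costs no memory but writing to it changes the original array; whether you get a view or a copy is exactly the kind of performance consideration the last core area covers.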

    instructors:
    • Eckhard Kadasch
    • Frank Löffler
  • description:
    The standard language to work with these databases is the Structured Query Language (SQL).

    In this course we will look at how to write queries to relational databases in SQL. We will start simple and move towards more complex queries, covering the following topics:
    • filtering
    • sorting
    • aggregating
    • joining (data from multiple tables)

    During the course you will type and click along, following the instructors, preferably on your own machine. Please reach out to us if you need help installing DB Browser for SQLite (https://sqlitebrowser.org/dl/). If necessary, you can also use the on-site computers that have DB Browser for SQLite preinstalled.
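    As a small preview of such queries, the sketch below combines filtering, joining, aggregating, and sorting in one statement; it uses Python's built-in sqlite3 module, and the tables and data are invented for illustration (they are not the course data set):

```python
import sqlite3

# An in-memory database with two toy tables.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE papers  (id INTEGER PRIMARY KEY, author_id INTEGER,
                          year INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Ben');
    INSERT INTO papers  VALUES (1, 1, 2022, 'A'), (2, 1, 2023, 'B'),
                               (3, 2, 2021, 'C');
""")

# Filtering (WHERE), joining (JOIN), aggregating (GROUP BY + COUNT),
# and sorting (ORDER BY) in a single query.
rows = con.execute("""
    SELECT a.name, COUNT(*) AS n_papers
    FROM papers p
    JOIN authors a ON a.id = p.author_id
    WHERE p.year >= 2022
    GROUP BY a.name
    ORDER BY n_papers DESC
""").fetchall()

print(rows)  # [('Ada', 2)]
```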
    core areas:
    • Writing SQL queries
    • Filtering
    • Sorting
    • Aggregating
    • Joining

    instructors:
    • Volker Schwartze
    • Philipp Schäfer
  • description:
    Whatever stage of your project you are at, this workshop will help you identify your data management needs. It will give you guidance on how to organize, structure, describe, and publish your data.

    Due to the increasing digitization and datafication in all fields of research, the proper management of research data becomes increasingly important. You spent months collecting samples and measurements in the field or in the lab? You explored, analyzed, and interpreted this data and finally published your findings in a scientific journal? Well, then it is time to think about your data again and what to do with it now. Or are you just starting your PhD or postdoc project and want to make sure not to overlook anything when it comes to obtaining and documenting your measurements?

    According to the guidelines for safeguarding good scientific practice, your results should be replicable and repeatable. Are you aware of the concept of FAIR data that is mentioned in the research data policies of many funders, institutions, and journals? FAIR means that data are findable, accessible, interoperable, and re-usable. To ensure this, your data should be well documented, securely stored, and available for later reuse. Publishing your research data through a dedicated data journal or repository may help you with this and may also get you an additional publication and further citations.

    A few days before the course starts, you will be given access to the preparation material (Moodle). It is recommended that you work through the material beforehand, as it will be referred to in the course.

    Topics:
    • Basic definitions in research data management and the data life cycle
    • Data management plans (DMP)
    • Documentation, data organization, metadata
    • Storage and back-up
    • Archiving
    • Publication and re-use of research data
    • Legal aspects
    Course dates: May 28 and May 31, 9:00 a.m. to 1:00 p.m.

    Content focus:
    • Introduction to research data management and the data-life-cycle concept
    • Preparing research data for re-use (data structure, data quality, metadata)
    • Opportunities and requirements in data publication and long-term data archiving
     
    instructors:
    • Cora Assmann
    • Roman Gerlach
  • description:

    In this course we look at how to develop software using the Julia programming language. We cover the idiosyncrasies of Julia as a programming language, learn how a Julia project is typically structured, discuss package management, mention a few important packages, and see how to call software written in other programming languages from Julia.

    The course language is English.


    Requirements:

    During the course you will type along, following the instructors, preferably on your own device. You therefore need a working installation of Julia. Please reach out to us if you need help installing Julia. If necessary, you can use computers provided at the workshop location, where all necessary software is already installed.

    We assume that you have experience programming in general, but experience with Julia is not required.


    core areas:
    • Name Julia’s idiosyncrasies
    • Navigate Julia’s documentation
    • Call from Julia into code written in other languages
    • Find existing Julia libraries

    instructors:
    • Frank Löffler
    • Philipp Schäfer
  • description:

    Participants will learn the basics of R syntax, data structures and control structures, and how to read and write data in R. In addition, participants will be able to choose an advanced example to study in more detail during the course. By the end of the course, students should have a solid foundation in R programming and be able to write simple scripts to manipulate and analyze data.


    Requirements:

    No prior knowledge of (statistical) programming is required for this course.

    During the course you will type along, following the instructors, preferably on your own machine. For this you need a working installation of R as well as RStudio. Downloads and installation instructions for various operating systems can be found here. If necessary, you can use computers provided at the workshop location, where all needed software is already installed. However, working in your usual environment is preferred.


    core areas:
    • Basics of R
    • Data Processing
    • Statistical Analysis
    • Data Visualization
    • R Markdown

    instructors:
    • Martin Kerntopf
    • Christian Knüpfer
  • description:

    Network Analysis is a powerful method to study and visualise the relationships between interconnected entities. In various fields such as social sciences, biology, transportation, and computer science, Network Analysis provides valuable insights into the structure, behaviour, and dynamics of complex systems. In this course we use R to digitally represent, analyse, and visualise networks. R is a flexible and widely used programming language for statistical calculations and data analysis, which offers a variety of packages and tools for Network Analysis.


    Requirements:

    Basic prior knowledge of (statistical) programming with R is an advantage for this course.

    During the course you will type along, following the instructors, preferably on your own machine. For this you need a working installation of R as well as RStudio. Downloads and installation instructions for various operating systems can be found here. If necessary, you can use computers provided at the workshop location, where all needed software is already installed; however, working in your usual environment is preferred.


    core areas:
    • basics of Network Analysis
    • R packages for Network Analysis
    • representation of networks as data structures in R
    • creation of networks from data
    • analysis of networks
    • centrality measures
    • visualisation of networks

    instructors:
    • Martin Kerntopf
    • Christian Knüpfer

Recently finished

  • description:

    This course will give you a foundational understanding of GitLab’s features. Its core functionality is collaborative and versioned management of projects that mostly work with plain text files, for example software source code or TeX based documents. Every change is recorded with information on authorship and a timestamp. With its built-in issue tracker and wiki it can even be the right tool for managing a project without any files.

    With the lessons learned in this course, you can make informed decisions on how to use GitLab as a tool.

    During this course, learners will follow along the instructors’ demonstration, using GitLab and Git, putting what they learn immediately into practice.

    Basic experience with Git is a requirement for this course. In winter semesters, we offer an alternating variant of the course that does not have this requirement.

    The course is taught in English.


    core areas:
    • Navigate GitLab
    • Create, use, and delete GitLab projects
    • Collaborate on GitLab projects
    • Automate Tasks in GitLab
    • Manage projects in GitLab
    • Document projects in GitLab wikis

    instructors:
    • Frank Löffler
    • Philipp Schäfer
  • description:

    Research Data Management (RDM) comprises all activities in the handling of research data, from generation, documentation, and storage to publication and archiving. In order to take into account the multitude of aspects in RDM, a data management plan (DMP) should be drawn up before the project starts. This plan should document the handling of the data generated in the research project and specify the resources required. Appropriate research data management and the creation of a DMP are a prerequisite for more and more funding organizations when applying for projects and are therefore an important part of project planning. In addition, good planning also helps to take costs into account from the outset when applying for funding, to ensure support from appropriate partners, and to establish the necessary infrastructure for effective and secure handling of research data during the project.
    The workshop will give an overview of the requirements of different funding organizations regarding RDM and the creation of DMPs. The structure and content of DMPs as well as useful support options in the form of consulting services and tools will also be presented. In addition, participants will get the opportunity to practice drafting texts for DMPs during exercise sessions.
     
    core areas:
    • Requirements of different funding institutions
    • Structure and content of a Data Management Plan
    • Useful tools and services
    • Exercises to draft DMPs

    instructors:
    • Benjamin Sippel
    • Cora Assmann
    • Roman Gerlach
  • description:

    If you work on documents or code together with multiple people, it can quickly become quite complex to keep track of everyone's changes. Maybe you e-mail different versions back and forth and start to lose track of the individual contributions. Or you use a shared folder on Nextcloud or Dropbox, but run the risk of overwriting other people's changes when working on the same file simultaneously. This is where Git can help you.

    Git is not only a great tool for versioning your own projects, it also provides useful features for collaborating. Git helps you to keep track of everyone's changes and to integrate them into one repository, be it code, documents, or even data. And Git scales from one, to two, to many people, including whole companies with thousands of developers.

    In this workshop, you will learn how to organize your work in branches, merge them together, share your work with others using remote repositories, and resolve any conflicts that may arise.


    Requirements:

    If you want to join this workshop, you should have a basic familiarity with Git on the command line. That is, you should know how to create repositories, how to stage and commit changes, and how to look at the version history as well as the state of a Git repository.

    We teach these Git basics in our course Basic Version Control with Git: A Beginner's Workshop once a semester (see our course catalog).

    During the course you will type along, following the instructors, preferably on your own machine. For this you need a working installation of Git (version 2.23 or above). Downloads and installation instructions for various operating systems can be found here: https://git-scm.com/downloads. If necessary, you can use computers provided at the workshop location, where all needed software is already installed; however, working in your usual environment is preferred.


    core areas:
    • working with branches (git branch)
    • clone a repository (git clone)
    • working with a remote repository (git pull, git push)
    • resolve version conflicts (git merge)
    • inspect who changed what (git blame)

    instructors:
    • Christian Knüpfer
    • Philipp Schäfer
  • description:

    A command line is a fundamental, interactive interface to a computer's operating system. Bash is a very widely used command line interface and is available for most operating systems. Together with a collection of auxiliary programs (including the GNU Core Utilities), it can be used to conveniently process many tasks. Bash shows its strength particularly in the automation of recurring tasks and the processing of a large number of files.

    Using examples, we will work through typical problems and show how processing steps can be combined to create complex workflows.


    Prerequisites:

    For this course, you should already have basic knowledge of Bash. You can acquire this in our course "Introduction to the Command Line", which we offer regularly. The basis for this course is the material from the Software Carpentry course The Unix Shell.

    During the course you will type along, following the instructors, preferably on your own machine. For this you need a working installation of Bash. Bash should already be installed on all machines with Linux or macOS as the operating system. To get Bash on Windows, you can install Git (https://git-scm.com/downloads), which contains GitBash. If you already installed the Windows Subsystem for Linux (https://learn.microsoft.com/en-us/windows/wsl/about), you also already have Bash installed. Alternatively, you can use computers provided at the workshop location, where all needed software is already installed. However, we strongly prefer that you use your own machine and, with it, your usual environment.


    core areas:
    • search and replace character patterns
    • management of processes
    • variables and functions
    • subshells and binding environments
    • expansion and command substitution

    instructors:
    • Christian Knüpfer
    • Frank Löffler
  • description:

    Code is everywhere - and scientific research is no exception. Programming allows researchers to handle large amounts of digital data with ease, to automate tasks that would otherwise be time-consuming or even impossible, and to explore new approaches. Programming skills make you more independent of pre-existing tools and let you tailor your workflow to your own needs.

    Python is one of the world's most popular programming languages, not least for scientific programming. Part of its popularity comes from the fact that it is rather easy to learn. But most importantly, you can use Python for a broad range of tasks, e.g. text analysis, sequence analysis, mathematical computations, machine learning, visualization, and many more.

    This workshop gives you a practical introduction to the basics of Python. It requires no prior experience with programming. Our goal is to show you some of the potential of Python, help you get started with programming and prepare you to take your next steps (on your own or in another course).

    To foster an interactive atmosphere among participants and instructors, this workshop is offered in person and not as a hybrid course.

    The course language is English.


    Prerequisites:

    No prior experience with programming is required.

    We encourage you to bring your own laptop. All you need is a working Python environment with the development environment JupyterLab or Jupyter Notebook installed. We recommend installing the Anaconda Python distribution as described here, which comes with all packages needed in this workshop.

    Alternatively, you can use one of the computers in the PC pool.


    Certificate:

    This course is part of our Software Carpentry Workshop. In order to receive the Software Carpentry Certificate you also have to attend the other two courses.


    core areas:
    • variables and assignments
    • basic data types
    • basic flow control
    • working with tabular data (Pandas package)
    • plotting data
    • writing and using functions
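    A first script at the level of this workshop, combining variables, a function, and flow control, might look like the following sketch (the readings are made up for illustration):

```python
# A few measurements stored in a variable (a list of numbers).
temperatures = [21.3, 19.8, 22.5, 20.1]

def mean(values):
    """Return the arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# Flow control: loop over the data and branch on a condition.
for t in temperatures:
    if t > mean(temperatures):
        print(f"{t} is above average")
```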

    instructors:
    • Eckhard Kadasch
    • Volker Schwartze
  • description:

    If you have ever written a paper, worked with research data, or programmed your own scripts, the following problems may sound familiar to you: You have accidentally overwritten something and would like to get it back from an earlier version of your file(s). You find yourself looking through a bunch of older versions, wondering what exactly has changed between your current version and the older ones.

    Git helps you avoid these sources of frustration. As a version control system, Git lets you save changes in your files to a history and thus helps document your work. Using that history, you can see later who changed what and when, and ideally also why. You can also go back and revert your project to an earlier stage, should you have accidentally deleted something or broken some functionality in your code. Git even lets you work together with others on the same project or even on the same file at the same time.

    In this workshop, we introduce you to the fundamental features of Git. You will learn how to use Git in your daily work to keep track of changes in your documents or code. Git was originally designed for software development, but it has quickly found users beyond the software community. If you consider yourself a non-technical person, this workshop is still for you.

    The course is taught in English.


    Requirements:

    During the course you will type along, following the instructors, preferably on your own machine. For this you need a working installation of Git (version 2.23 or above). Downloads and installation instructions for various operating systems can be found here: https://git-scm.com/downloads. If necessary, you can use computers provided at the workshop location, where all needed software is already installed; however, working in your usual environment is preferred.

    No special previous knowledge is required for the course. You should only be familiar with basic concepts of file systems, files and folders.


    Certificate:

    This course is part of our Software Carpentry Workshop. In order to receive the Software Carpentry Certificate you also have to attend the other two courses.


    core areas:
    • introduction to version control
    • install and config Git (git config)
    • create a repository (git init)
    • basic Git workflow: change - stage - commit (git add, git commit)
    • inspect status (git status)
    • explore the version history (git log)
    • compare versions (git diff)
    • revert changes (git restore, git reset)
    • use graphical user interfaces (git gui, GitLab)

    instructors:
    • Eckhard Kadasch
    • Frank Löffler
  • description:

    We will start by getting familiar with the command line and in particular Bash, a command-line shell and programming language. Knowing how to use Bash offers us access to various small programs that, when put together, can help automate tasks related to working with files and programs that can be accessed through a command line interface.

    Then we will get to know Git, a version control system. That means, we learn to track changes in source code: Who changed what, when, and for what reason. This can, for example, help track down bugs.

    Finally, we will learn to program in Python. Beginning with foundational concepts of the language, we will work toward writing our first Python script.

    The course language is English.


    Requirements:

    During the course you will type along, following the instructors, preferably on your own machine. Please reach out to us if you need help installing the required software. You need Git (https://git-scm.com/downloads), Python with the JupyterLab programming environment (we recommend using the Anaconda Python distribution), and Bash. Bash either comes preinstalled with your operating system (macOS and Linux) or comes with the Git installation (Windows). If necessary, you can also use the on-site computers that have the required software preinstalled.

    No special previous knowledge is required for the course. You should only be familiar with basic concepts of file systems: files and folders.


    Certificate:

    In order to receive the Software Carpentry Certificate you have to attend on all four days. But you can also attend only one or two of the courses that make up this workshop.


    core areas:
    • Command Line with Bash
    • Version Control with Git
    • Programming with Python

    instructors:
    • Eckhard Kadasch
    • Frank Löffler
    • Volker Schwartze

Old

  • description:

    The command line is an interactive interface to your computer. Instead of controlling it by clicking, you type in commands. At first glance, this might seem old-fashioned and uncomfortable. But if you are working with many files at the same time or are programming, the command line is a very efficient tool.

    Searching through files in a directory and its subdirectories for a word or another sequence of characters as well as finding all files that have been modified in a certain period are examples of the command line’s strength. In addition, many scripts and applications written by other researchers can only be used from the command line.

    In this workshop we will teach the most common commands of Bash, a command-line interface (or shell) first created for Unix, but now the most widespread and available for all major desktop operating systems: macOS, Microsoft Windows, and Linux.

    The course language is English.


    Requirements:

    During the course you will type along, following the instructors, preferably on your own machine. For this you need a working installation of Bash. Bash should already be installed on all machines with Linux or macOS as the operating system. To get Bash on Windows, you can install Git (https://git-scm.com/downloads), which contains GitBash. If you already installed the Windows Subsystem for Linux (https://learn.microsoft.com/en-us/windows/wsl/about), you also already have Bash installed. Alternatively, you can use computers provided at the workshop location, where all needed software is already installed. But we strongly prefer that you use your own machine and, with it, your usual environment.

    No special previous knowledge is required for the course. You should only be familiar with basic concepts of file systems, files and folders.


    Certificate:

    This course is part of our Software Carpentry Workshop. In order to receive the Software Carpentry Certificate you also have to attend the other two courses.


    core areas:
    Use of the command line to
    • manage files and folders
    • start and control programs
    • search for files and within files
    • manipulate the content of files
    • create small scripts for repeating tasks

    instructors:
    • Eckhard Kadasch
    • Frank Löffler
  • description:
    This short introduction to R is tailored for researchers with zero experience in R or those who seek a refresher. In this hands-on crash course, I will provide you with the fundamental knowledge needed to get started working with R for data analysis. Using a practical example dataset, you will learn how to import, clean, process, and analyse data, as well as visualize the results. The workshop emphasizes executing specific tasks in R without overwhelming you with intricate details. We'll cover the basics of R syntax and working with objects, and explore data handling and graphics. If you're seeking a quick way to dive into R and want to focus on applying R to your data right away, this workshop is for you. However, if you want a more comprehensive introduction to R, including more advanced topics such as writing custom functions for efficient data analysis, see my 2-day workshop [Link: Introducing R as a flexible tool for data analysis].  
    instructors:
    • Jan Plötner
  • description:

    In this Coffee Lecture, we introduce the database browser DB Browser for SQLite and demonstrate what kind of analyses it is capable of. In addition, we put it into the context of other common spreadsheet programs, such as LibreOffice Calc or Microsoft Excel.


    instructors:
    • Philipp Matthias Schäfer (Uni Jena)
  • description:

    In recent years, the specific requirements of funding institutions (e.g. DFG, EU) in the field of research data management (RDM) have increased. Principal investigators are faced with the challenge of not only designing innovative research projects, but also ensuring that they meet the specific RDM requirements.

    This workshop is specifically designed for Principal Investigators of research projects (professors, junior-professors, postdocs). It offers an introduction to the key aspects of RDM, from application to implementation, focusing on the specific requirements of funding institutions. Furthermore, the workshop will give an insight into the resources and support structures for research data management at the Friedrich Schiller University.
    core areas:
    • Requirements of funding bodies for research data management
    • Resources and support structures for research data management at Friedrich Schiller University
    • Q&A-session on RDM for PIs

    instructors:
    • Benjamin Sippel
    • Cora Assmann
    • Roman Gerlach
  • description:

    A large part of the data we use on a daily basis is not meant to be shared unconditionally. Encrypting files and folders is a simple way to restrict access to data, whether it is passwords or sensitive business information. In an educational or university context, this might be project accounting records to be stored in the cloud, on a USB stick, or on an external hard drive, or the need to adequately protect personal data collected in one's own research from unauthorized access. There are tools that make encryption suitable for everyday use and much easier than is often assumed; we will introduce them in this talk.


    instructors:
    • Stefan Kirsch (EAH Jena)
  • description:

    Data is the new oil, or so you hear a lot these days. However, similar to oil, data is only valuable if it is used wisely: when it is properly analyzed, evaluated, and then interpreted to gain useful insights. Today, machine learning (ML) methods that go beyond statistical evaluations are widely used for this purpose. This workshop will first provide a descriptive theoretical introduction to the basics of machine learning and give you an overview of the most commonly used methods. Using sample data sets, we will show how data can be analyzed statistically and visually, which types of machine learning methods can be applied to it, and how to select, train, and apply suitable ML models to new data.

    To make this tangible for participants, we use the no-code software Orange Data Mining in the workshop. The package allows us to implement all steps of a machine-learning workflow without using a programming language: from building, to training, to applying a model.

    To allow direct exchange between participants and instructors, we offer this workshop in face-to-face format rather than as a hybrid course.

    The course is taught in English.


    Prerequisites:

    Prior experience is not necessary. You may use your own laptop or one of the pool PCs.


    core areas:
    • Data analysis and data representations
    • Types, methods and models of machine learning
    • Model training
    • Model validation

    instructors:
    • Eckhard Kadasch
    • Oliver Mothes
  • description:

    We will help you to answer questions similar to the following:

    • Which license should I use to publish my data?
    • Is that software library compatible with the one my grant requires me to use?
    • Are there different licenses for data and for software?

    We will start with an overview of software and data licenses, their properties, and how they affect scientific work with software and data in particular. There will be exercises based on realistic examples. Finally, concrete cases brought by participants (you) can be discussed.

    The course is taught in English.


    Requirements:

    No special previous knowledge is required for the course.


    core areas:
    • data licenses
    • software licenses

    instructors:
    • Cora Assmann
    • Frank Löffler
    • Philipp Schäfer
  • description:

    In this Coffee Lecture episode, we will take a look at a small selection of software that helps to ensure that backups are actually created and are at hand in case they should be needed.


    instructors:
    • Stefan Kirsch (EAH Jena)
  • description:

    Many software products can be used to write texts. Popular word processors, like LibreOffice or Microsoft Word, pose challenges when used in specific scientific contexts. LaTeX is designed to circumvent these challenges. Which advantages of LaTeX are most important for you depends on your field of research, the other tools you use and, last but not least, personal preferences, but they may include: LaTeX plays well with version control systems like Git, LaTeX documents typically stay small, LaTeX can easily be edited not only by humans but also by software, LaTeX can set mathematical expressions beautifully, and LaTeX may already be the standard in your field of research.

    After this workshop, you will appreciate the differences between word processors like LibreOffice or Microsoft Word and systems like LaTeX. Being a hands-on workshop, the main focus will lie on you taking your own first steps with LaTeX, if possible on your own computer. This workshop alone will likely not be enough for a beginner to use LaTeX entirely without further help or reference, but it aims to provide a quick start for creating simple documents and a solid foundation for writing scientific publications in collaboration with more advanced users.


    Requirements:

    If you bring your own computer, please install LaTeX in advance, as the installation can take quite a long time. If necessary, you can use computers provided at the workshop location, where all needed software is already installed.


    LaTeX:

    How you best install LaTeX depends a lot on the details of your operating system. Installation instructions for different operating systems can be found here. If in doubt, prefer TeX Live. On Linux, prefer the packages prepared by your distribution over the installer from TeX Live itself.


    An editor:

    LaTeX uses plain text files, so any plain text editor will do. Some editors have additional features for LaTeX files, and others have been made especially for LaTeX. If you already have a preferred plain text editor, we recommend sticking with it. If you do not, we recommend installing TeXStudio for this course.

    No special previous knowledge is required for the course. You should only be familiar with basic concepts of file systems, files and folders.


    core areas:
    • document structure
    • basic formatting
    • symbols and math
    • images and figures
    • citations
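
    As a taste of the first steps, a minimal document touching several of these core areas (document structure, basic formatting, and math) might look like this:

    ```latex
    \documentclass{article}

    \begin{document}

    \section{Introduction}

    Some text with \emph{emphasis} and an inline formula, $E = mc^2$.

    \begin{equation}
      \int_0^\infty e^{-x^2}\,\mathrm{d}x = \frac{\sqrt{\pi}}{2}
    \end{equation}

    \end{document}
    ```

    Compiling this file (for example with pdflatex) produces a typeset PDF with a numbered section and a numbered equation.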

    instructors:
    • Frank Löffler
    • Philipp Schäfer
  • description:
    This workshop is designed to introduce R as a flexible tool for data analysis and is suitable for researchers who have little or no prior experience with R. Participants will learn the basic steps of data analysis, including data importing, cleaning, processing, analysis, and visualization. Additionally, the workshop covers basic programming in R and teaches participants how to write their own functions and efficiently work with many variables and/or subsets of their data. It not only teaches you the basics but also aims to give you an understanding of R as a programming language, providing a comprehensive introduction to R. If you are seeking a more hands-on approach focused on the fundamentals of R, see my 1-day workshop (https://qualifizierung.uni-jena.de/pages/coursedescription.jsf?courseId=62454320&catalogId=53125411).

    Workshop dates: December 8 and 15, 2023; 9:00 a.m. - 5:00 p.m.
    instructors:
    • Jan Plötner
  • description:
     
    After months of collecting, analyzing and interpreting your data, you would now like to publish your results in a journal? Then it is time to take a closer look at your data and think about how to prepare them. Or are you just at the starting line of your doctoral or postdoc project and want to make sure you have not overlooked anything in conducting and documenting your research?
     
    According to the DFG guidelines for safeguarding good research practice, your results should be traceable and reproducible. Have you ever heard of FAIR data? For your data, this means they should be Findable, Accessible, Interoperable and Reusable. Are you aware that publishing your data in a dedicated data journal or repository can not only help you meet these requirements, but can also earn you an additional publication and further citations?
     
    Publication and long-term archiving are only two aspects of research data management. This workshop is meant to help you identify your data management needs, regardless of which phase of the project you are in. It also provides practical guidance on how to organize, structure, describe and publish your data in order to meet the requirements of good research practice.
     
    Course topics:
    • Definition of research data management and the research data lifecycle
    • Data management plans
    • Documentation, data organization, metadata
    • Storage and backup
    • Archiving
    • Publication and reuse of research data
    • Legal aspects

    The course takes place on December 6 and 8. We will provide self-study materials before the two sessions; participants are expected to review the material in advance and work on the assigned tasks. The sessions will include exercises, group work, discussions and presentations.
    instructors:
    • Roman Gerlach
    • Jeanin Jügler
  • description:

    While HPC clusters are composed of components similar to those found in PCs or workstations, they are used in a very different way. This is mainly owed to the fact that they consist of many computers networked together and that they are shared by multiple users.

    We start this workshop with explaining to you the inner structure of a typical HPC cluster and highlight the differences to a workstation. You will then learn how to use the Slurm workload manager, which is used on the university cluster “Draco” to distribute compute jobs across the hardware. We will also explore the various types of batch jobs and interactive tasks. During the hands-on sessions, you will submit your first compute jobs to the cluster and hopefully enjoy their results. Finally, we will provide guidance on how to install and use your own parallel software.

    This workshop is held in person; online participation is not possible. The course language is English.


    Requirements:

    To participate, you need a user account of the University of Jena, which needs to be entered during registration. You should also have basic familiarity with Linux and the command line, or the curiosity to explore them.

    You may use your own laptop or one of the pool PCs.


    core areas:
    • Structure of HPC systems
    • Overview of local HPC resources
    • Usage of local HPC resources
    • Parallelisation concepts
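
    A first batch job of the kind submitted in the hands-on sessions might be described by a script like this sketch (the resource values and job name are generic placeholders, not Draco's actual configuration):

    ```shell
    #!/bin/bash
    #SBATCH --job-name=hello       # job name shown in the queue
    #SBATCH --ntasks=1             # number of tasks (processes)
    #SBATCH --cpus-per-task=4      # CPU cores per task
    #SBATCH --mem=4G               # memory for the job
    #SBATCH --time=00:10:00        # wall-clock limit (hh:mm:ss)
    #SBATCH --output=hello-%j.log  # output file, %j = job ID

    # The actual work: report where the job ran
    echo "Running on $(hostname) with $SLURM_CPUS_PER_TASK cores"
    ```

    Such a script would be submitted with `sbatch hello.sh`, and its state checked with `squeue -u $USER`.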

    instructors:
    • Eckhard Kadasch
    • André Sternbeck
  • description:

    Whether digital microscopy images or videos created by underwater robots - image data are an important tool for researching and monitoring habitats in numerous research fields. However, evaluating the data is laborious and time-consuming. BIIGLE is a web-based software for the efficient manual analysis of image and video data. In addition to numerous tools for manual analysis, it also offers machine learning methods to support this task automatically.


    instructors:
    • Martin Zurowietz (Uni Bielefeld)
  • description:

    Matplotlib is a comprehensive library for visualizing data and producing high-quality plots, which quickly became the standard for two-dimensional visualizations in the Python world. Its flexible programming interface makes easy plots easy but also allows you to create very complex figures. Matplotlib is therefore an excellent tool for the everyday work of scientists, especially if Python is already one of their programming languages of choice.

    In this workshop, you will learn how to use Matplotlib for your scientific visualizations. We will look at the various types of plots Matplotlib can generate, how to style and annotate them, and how to export them in various formats. While doing so, we will explain the fundamental anatomy of a Matplotlib figure and give some advice on how to design plots well.

    At the end of this workshop you will not only be able to visualize your data, you will also have a tool at hand that lets you do this in a scriptable and, thus, repeatable fashion.

    Due to the highly interactive nature of this course, it can only be offered on a face-to-face basis and not as a hybrid course.

    The course language is English.


    Prerequisites:

    To take part in this workshop, you should be familiar with the basics of Python. Some experience with Numpy arrays is beneficial but not required.


    core areas:
    • available types of plots
    • anatomy of Matplotlib figures
    • object-oriented and MATLAB-style programming interface
    • plot styling and annotation
    • exporting plots
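
    For a first impression of the object-oriented interface and plot export covered here, a small sketch might look like this (the data and file name are invented; the Agg backend is selected only so the script also runs without a display):

    ```python
    import numpy as np
    import matplotlib
    matplotlib.use("Agg")  # render without a display, e.g. on a remote machine
    import matplotlib.pyplot as plt

    # Some example data: a damped oscillation
    x = np.linspace(0, 10, 200)
    y = np.exp(-0.3 * x) * np.cos(2 * np.pi * x)

    # Object-oriented interface: create a figure and an axes object
    fig, ax = plt.subplots(figsize=(6, 4))
    ax.plot(x, y, label="damped oscillation")
    ax.set_xlabel("time / s")
    ax.set_ylabel("amplitude")
    ax.legend()

    # Export in a vector format suitable for publications
    fig.savefig("oscillation.pdf")
    ```

    Rerunning this script with new data regenerates the figure, which is exactly the repeatability argument made above.
    
    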

    instructors:
    • Frank Löffler
    • Volker Schwartze
  • description:

    This course will give you a foundational understanding of GitLab’s features. Its core functionality is collaborative and versioned management of projects that mostly work with plain text files, for example software source code or TeX based documents. Every change is recorded with information on authorship and a timestamp. With its built-in issue tracker and wiki, it can even be the right tool for managing a project without any files.

    Based on what you learn in this course, you can make informed decisions on how to use GitLab as a tool.

    During this course, learners will follow along the instructors’ demonstration, putting what they learn immediately into practice.

    Even though GitLab was developed for managing Git repositories, it is not necessary to have previous experience with Git.

    The course is taught in English.


    core areas:
    • Navigate GitLab
    • Create, use, and delete GitLab projects
    • Collaborate on GitLab projects
    • Automate Tasks in GitLab
    • Manage projects in GitLab
    • Document projects in GitLab wikis

    instructors:
    • Philipp Schäfer
    • André Sternbeck
  • description:

    Code is everywhere - and scientific research is no exception to this. Programming allows researchers to handle large amounts of digital data with ease, to automate tasks that would otherwise be time-consuming or even impossible to do, and to explore new approaches. The programming knowledge that you develop in this workshop will allow you to be more independent from dedicated software packages and to tailor your workflow to your own needs.

    In this workshop, we use Python, one of the world's most popular programming languages — not only, but also, for scientific programming. Part of its popularity comes from the fact that it is rather easy to learn. But most importantly, you can use Python for a broad range of tasks, e.g. text analysis, sequence analysis, mathematical computations, machine learning, visualization, and many more.

    This workshop gives you a practical introduction to the basics of programming in Python. We will focus on fundamental commands that are prerequisites for most use cases. Additionally, you'll get acquainted with the Pandas library, which enables efficient processing and analysis of tabular data. We will explore how to handle tables, analyze the data, and visualize it using a small number of commands. Our goal is to show you some of Python's capabilities, help you get started with programming and prepare you to take your next steps (on your own or in another course).

    To foster an interactive atmosphere among participants and instructors, this workshop is offered in person and not as a hybrid course.

    The course language is English.


    Prerequisites:

    No prior experience with programming is required.


    Certificate:

    This course is part of our Software Carpentry Workshop. In order to receive the Software Carpentry Certificate you also have to attend the other two courses.


    core areas:
    • variables and assignments
    • basic data types
    • basic flow control
    • working with tabular data
    • plotting data
    • writing and using functions
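
    A brief sketch of the kind of steps listed above, using the Pandas library mentioned in the description (the column names and values are invented for illustration):

    ```python
    import pandas as pd

    # A small made-up table of measurements
    data = pd.DataFrame({
        "site": ["A", "A", "B", "B"],
        "temperature": [20.5, 21.0, 18.2, 17.9],
    })

    # Writing and using a small function, as covered in the course
    def celsius_to_kelvin(t_celsius):
        """Convert a temperature from degrees Celsius to Kelvin."""
        return t_celsius + 273.15

    # Apply the function to a whole column at once
    data["temp_kelvin"] = data["temperature"].apply(celsius_to_kelvin)

    # Aggregate the table: mean temperature per site
    means = data.groupby("site")["temperature"].mean()
    print(means)
    ```
    
    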

    instructors:
    • Eckhard Kadasch
    • Volker Schwartze
  • description:

    Git helps you avoid common sources of frustration. As a version control system, Git lets you easily save changes in your files to a history and thus helps with documenting your work. Using that history, you can see what you changed and when you changed it. You can always go back and revert your project to an earlier stage, should you have accidentally deleted text or broken some functionality in your code. Git even lets you work together with others on the same project, or even on the same file at the same time.

    In this workshop, we introduce you to the fundamental features of Git. You will learn how to use Git in your daily work to keep track of changes in your documents or code. Git has originally been designed for software development, but has quickly found users beyond the software community. If you consider yourself a non-technical person, this workshop is still for you. The Git basics are easy to learn and easy to apply.

    The course is taught in English.


    Requirements:

    During the course you will type along, following the instructors, preferably on your own machine. For this you need a working installation of Git (version 2.23 or above). Downloads and installation instructions for various operating systems can be found here: https://git-scm.com/downloads. If necessary, you can use computers provided at the workshop location, where all needed software is already installed.

    No special previous knowledge is required for the course. You should only be familiar with basic concepts of file systems, files and folders.


    Certificate:

    This course is part of our Software Carpentry Workshop. In order to receive the Software Carpentry Certificate you also have to attend the other two courses.


    core areas:
    • introduction to version control
    • install and config Git (git config)
    • create a repository (git init)
    • basic Git workflow: change - stage - commit (git add, git commit)
    • inspect status (git status)
    • explore the version history (git log)
    • compare versions (git diff)
    • revert changes (git restore, git reset)
    • use graphical user interfaces (git gui, GitLab)
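
    For a first impression, the basic workflow from the list above might look like this in practice (the directory, file name, and user details are placeholders):

    ```shell
    # Create a fresh repository in an empty directory
    mkdir my-paper && cd my-paper
    git init

    # Tell Git who you are (needed once before committing)
    git config user.name  "Ada Lovelace"
    git config user.email "ada@example.org"

    # The basic workflow: change - stage - commit
    echo "Introduction" > notes.txt
    git add notes.txt                  # stage the change
    git commit -m "Add first notes"    # record it in the history

    # Inspect what happened
    git status          # current state of the working directory
    git log --oneline   # the version history, one line per commit
    ```
    
    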

    instructors:
    • Frank Löffler
    • Philipp Schäfer
  • description:

    The command line is an interactive interface to the operating system of a computer. Instead of controlling your computer by clicking, you type in commands. At first glance, controlling your computer this way might seem old-fashioned and uncomfortable. But if you are working with many files at the same time or are programming, the command line is a very efficient tool.

    Searching through files in a directory and its subdirectories for a word or another sequence of characters as well as finding all files that have been modified in a certain period are examples of the command line’s strength. In addition, many scripts and applications written by other researchers can only be used from the command line.

    In this workshop we will teach the most common commands of Bash, a command line interface (or shell) first created for Unix, but now the most widespread and available for all major desktop operating systems: macOS, Microsoft Windows, and Linux.

    The course language is English.


    Requirements:

    During the course you will type along, following the instructors, preferably on your own machine. For this you need a working installation of Bash. Bash should already be installed on all machines with Linux or macOS as the operating system. To get Bash on Windows, you can install Git (https://git-scm.com/downloads), which includes Git Bash. If necessary, you can use computers provided at the workshop location, where all needed software is already installed.

    No special previous knowledge is required for the course. You should only be familiar with basic concepts of file systems, files and folders.


    core areas:
    Use of the command line to
    • manage files and folders
    • start and control programs
    • search for files and within files
    • manipulate the content of files
    • create small scripts for repeating tasks
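
    A short sketch of what such commands look like in practice (the file and folder names are invented for illustration):

    ```shell
    # Manage files and folders: create a folder and two files
    mkdir -p results
    echo "sample,temp" > results/run1.csv
    echo "sample,temp" > results/run2.csv

    # List files, find files containing a word, count matching lines
    ls results
    grep -l "temp" results/*.csv
    grep -r "temp" results | wc -l

    # A tiny script for a repeating task: report the size of each file
    for f in results/*.csv; do
        echo "$f: $(wc -c < "$f") bytes"
    done
    ```
    
    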

    instructors:
    • Christian Knüpfer
    • Philipp Schäfer
  • description:

    We will start by getting familiar with Bash, a command line shell and programming language. Knowing how to use Bash offers us access to various small programs that, when put together, can help automate tasks related to working with files and programs that can be accessed through a command line interface.

    Then we will get to know Git, a version control system. That means, we learn to track changes in source code: Who changed what, when, and for what reason. This can, for example, help track down bugs.

    Finally, we will learn to program in Python. From the very start we will work toward writing our first Python script.


    Requirements:

    During the course you will type along, following the instructors, preferably on your own machine. Please reach out to us if you need help installing Git (https://git-scm.com/downloads) or Python (https://www.python.org/downloads/). Bash should either be preinstalled (macOS/Linux) or come with Git (Windows). If necessary, you can also use the on-site computers that have the required software preinstalled.

    No special previous knowledge is required for the course. You should only be familiar with basic concepts of file systems: files and folders.


    Certificate:

    In order to receive the Software Carpentry Certificate you have to attend on all four days. But you can also attend only one or two of the courses that make up this workshop:


    core areas:
    • Command Line with Bash
    • Version Control with Git
    • Programming with Python

    instructors:
    • Eckhard Kadasch
    • Christian Knüpfer
    • Frank Löffler
    • Volker Schwartze
    • Philipp Schäfer
  • description:

    Whether biology, sociology, psychology or economics, whether questionnaire, measurement or instrument data: in science, research data are often processed in tabular form. However, numerous obstacles lurk in this process, and the wrong handling of tabular data can lead to problems in everyday research. In the first part of our workshop, we will show you how to organize your tabular data effectively and what you should pay attention to when formatting it. In practical exercises, we will apply the basics we have learned to test data sets. Although the exercises will be conducted in Excel, the concepts can be easily transferred to other applications.

    In the second part of the workshop, we will introduce you to the open source tool OpenRefine, which helps you prepare your research data for analysis. You will learn how to import your data into OpenRefine, find inconsistencies and fix them. Furthermore, we will show you how to save your cleaned data in suitable data formats. Finally, we will give you an insight into what other possibilities OpenRefine has in store to improve data quality. In this part of the workshop, too, you will have the opportunity to try out what you have learned on a test data set.

    The workshop is aimed at researchers at Thuringian universities and research institutions. Basic knowledge of working with tables is helpful; previous knowledge of OpenRefine is not necessary. Please note: to ensure seamless operation during the workshop, a short technical trial (5-10 minutes) is scheduled on October 24, 2023 within the offered time slots.


    instructors:
    • TKFDM
  • description:

    This Coffee Lecture is not a comprehensive introduction to OpenRefine. If you would like to get a more detailed insight into the tool and learn how data preparation works through practical exercises, we cordially invite you to our online workshop “Efficient organization and preparation of tabular data” on October 26th 2023. The course will be conducted by TKFDM and FDM-HAWK Project online from 8:30 to 12:30 and is open to all researchers at Thuringian institutions.


    instructors:
    • Cora Assmann
  • description:
    LaTeX is the standard software for the publication of scientific documents in many fields. This workshop focuses on exactly that use of LaTeX (which includes theses) and assumes no previous knowledge.

    Many software products can be used to write texts. However, many of those that you may already know, like LibreOffice or Microsoft Word, pose challenges when used in a scientific context. LaTeX is designed to circumvent these challenges. Which of its advantages are most important for you depends on your field of research, other tools you use and, last but not least, personal preferences, but they may include: LaTeX plays well with version control systems like Git, LaTeX documents typically stay small, LaTeX can easily be edited not only by humans but also by software, LaTeX can set mathematical expressions beautifully, and LaTeX may already be the standard in your field of research.

    After this workshop, you will appreciate the differences between document editors like LibreOffice or Microsoft Word and systems like LaTeX. Being a hands-on workshop, the main focus will lie on you taking your own first steps with LaTeX, if possible on your own computer. This workshop alone will likely not be enough for a beginner to use LaTeX entirely without further help or reference, but it aims to provide a quick start as a main author and a solid foundation for writing scientific publications in collaboration with more advanced users.

    Please bring your own device, as we aim to get you set up in your own environment. You will receive more information (e.g. what to install beforehand) closer to the workshop date (roughly a week in advance).
    instructors:
    • Frank Löffler
    • Philipp Schäfer
  • description:

    We will start with a general introduction to the basic concepts of programming and into how a computer works. Then, you will learn the basic elements of programming — such as instructions, variables, loops, and conditional statements. As part of this workshop, you will write a program that finds the most frequently used words in an entire book, allowing you to gain an initial glimpse into the text’s content.

    We hold this workshop in an interactive and hands-on fashion, with the teaching segments interleaved with many exercises in the Python programming language. To foster an interactive atmosphere between participants and instructors, we offer this workshop in person and not as a hybrid course.

    The course language is English.


    core areas:
     
    • basic data types
    • variables
    • basic flow control
    • functions
    • basic file reading and writing
    • command line arguments
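
    The word-frequency program described above can be sketched in a few lines; here a short sample text stands in for a whole book, and the file name is just an example:

    ```python
    from collections import Counter
    import re

    # A tiny stand-in for a whole book
    text = """It was the best of times, it was the worst of times,
    it was the age of wisdom, it was the age of foolishness."""

    # Basic file writing: save the text to a file
    with open("book.txt", "w", encoding="utf-8") as f:
        f.write(text)

    # Basic file reading: load it back, split into lowercase words, count them
    with open("book.txt", encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())

    counts = Counter(words)
    for word, n in counts.most_common(3):
        print(word, n)
    ```
    
    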
     
    instructors:
    • Eckhard Kadasch
    • Volker Schwartze
  • description:
    Code is everywhere - and scientific research is no exception, whether in the STEM disciplines or, more recently, in the growing fields of digital humanities and computational social science. Programming allows researchers to handle large amounts of digital data with ease, to automate tasks that would otherwise be time-consuming or even impossible, and to explore new approaches. Programming skills allow you to be more independent of pre-existing tools and to tailor your workflow to your own needs.

    Python is one of the world's most popular programming languages, not only, but also, for scientific programming. Part of its popularity comes from the fact that it is rather easy to learn. But most importantly, you can use Python for a broad range of tasks, e.g. text analysis, sequence analysis, mathematical computations, machine learning, visualization, and many more.

    This three-session workshop gives you a practical introduction to the basics of Python. It requires no prior experience with programming. Our goal is to show you some of the potential of Python, help you get started with programming and prepare you to take your next steps (on your own or in another course).

    Please bring your own device, as we aim to get you set up in your own environment. You will receive more information (e.g. what to install beforehand) closer to the workshop date (roughly a week in advance).
    instructors:
    • Frank Löffler
    • Philipp Schäfer
  • description:
    In this online workshop, you will learn which spreadsheet settings are helpful for this, how to work with cell references, and how to summarize and analyze large amounts of data according to different criteria.

    The workshop is suitable for people who already have prior experience with Excel.
    instructors:
    • Informationsverarbeitung und angewandte Datentechnik GmbH
  • description:

    In this workshop, we are going to answer common questions about containers. We start with explaining how Docker containers work and where the lines are between the container and the host operating system. Then you are going to learn — in practical exercises — how to use Docker's command-line interface to get containers, run and manage them, and to create your own container images.

    After this workshop you will be able to take advantage of Docker in your own scientific work. You will be able to run applications in a Docker container on a workstation or a cluster and to make your scientific workflows reproducible by creating and sharing your own Docker images.


    Prerequisites:

    In order to take part in this workshop, you should have basic knowledge of the Linux command line and should be able to navigate the file system.


    core areas:
    • Docker terminology: container image, container, Dockerfile
    • Downloading container images
    • Running containers
    • Managing containers and container images
    • Creating container images
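
    As a sketch of the Dockerfile format used to create container images, assuming a hypothetical analysis script named analyze.py (the base image tag and dependencies are examples, not course requirements):

    ```dockerfile
    # Start from an official Python base image
    FROM python:3.11-slim

    # Install the dependencies of the analysis into the image
    RUN pip install --no-cache-dir numpy

    # Copy the analysis script into the image and set it as the default command
    COPY analyze.py /app/analyze.py
    CMD ["python", "/app/analyze.py"]
    ```

    Such an image would be built with `docker build -t my-analysis .` and run with `docker run --rm my-analysis`, so the same environment travels with the workflow.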

    instructors:
    • Eckhard Kadasch
    • André Sternbeck
  • description:

    Its flexible programming interface makes easy plots easy but also allows you to create very complex figures. Matplotlib is therefore an excellent tool for the everyday work of scientists, and one for which getting into Python is worthwhile on its own.

    In this workshop, you will learn how to use Matplotlib for your scientific visualizations. We will look at the various types of plots Matplotlib can generate, how to style and annotate them, and how to export them in various formats. While doing so, we will explain the fundamental anatomy of a Matplotlib figure and give some advice on how to design plots well.

    At the end of this workshop you will not only be able to visualize your data, you will also have a tool at hand that lets you do this in a scriptable and, thus, repeatable fashion. The data changes — you can just rerun your script; no need for opening a plotting application, clicking, and manually adjusting and saving plots.


    Prerequisites:

    To take part in this workshop, you should be familiar with the basics of Python. Some experience with NumPy arrays is beneficial but not required.


    core areas:
    • available types of plots
    • anatomy of Matplotlib figures
    • object-oriented and MATLAB-style programming interface
    • plot styling and annotation
    • exporting plots

    instructors:
    • Frank Löffler
    • Volker Schwartze
  • description:

    While HPC clusters are composed of components similar to those found in PCs or workstations, they are used in a very different way. This is mainly owed to the fact that they consist of many computers networked together and that they are shared by multiple users.

    We start this workshop by explaining to you the inner structure of a typical HPC cluster and highlight the differences to a workstation. You will then learn how to use the Slurm workload manager, which is used on the university cluster “Draco” to distribute compute jobs across the hardware. You will learn how to use it to run various types of batch jobs and interactive tasks. During the hands-on sessions, you will submit your first compute jobs to the cluster yourself and hopefully enjoy their results. Additionally, we will provide guidance on how to install and use your own parallel software.

    Requirements:

    • FSU account (needs to be specified at the registration page)
    • no fear of Linux and the command line

    instructors:
    • Eckhard Kadasch
    • André Sternbeck
  • description:

    Code is everywhere - and scientific research is no exception to this. Programming allows researchers to handle large amounts of digital data with ease, to automate tasks that would otherwise be time-consuming or even impossible to do, and to explore new approaches. Programming skills allow you to be more independent of pre-existing tools and to tailor your workflow to your own needs.

    Python is one of the world's most popular programming languages, not only, but also, for scientific programming. Part of its popularity comes from the fact that it is rather easy to learn. But most importantly, you can use Python for a broad range of tasks, e.g. text analysis, sequence analysis, mathematical computations, machine learning, visualization, and many more.

    This workshop gives you a practical introduction to the basics of Python. It requires no prior experience with programming. Our goal is to show you some of the potential of Python, help you get started with programming and prepare you to take your next steps (on your own or in another course).

    This course is part of the Data Carpentry Workshop. If you wish to receive a Data Carpentry Certificate, you must attend all parts of the workshop. In this case, please register here.
    core areas:
    • basic data types
    • variables
    • basic flow control
    • functions
    • basic file reading and writing
    • command line arguments
    • basic debugging
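    The core areas above fit in a few lines of code. As a rough sketch (names and values invented for illustration), a first Python script might look like this:

```python
# Basic data types and variables
city = "Jena"
temperatures = [4.5, 7.2, 3.1]  # a list of floats


# A function with basic flow control
def describe(values):
    """Summarize a list of numbers in one line."""
    if not values:  # handle the empty case first
        return "no data"
    mean = sum(values) / len(values)
    return f"{len(values)} values, mean {mean:.2f}"


print(city, "->", describe(temperatures))  # prints: Jena -> 3 values, mean 4.93
```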

    instructors:
    • Eckhard Kadasch
    • Volker Schwartze
  • description:
    The standard language for working with relational databases is the Structured Query Language (SQL).

    In this course we will look at how to write queries to relational databases in SQL. We will start simple and move towards more complex queries, covering the following topics:
    • filtering
    • sorting
    • aggregating
    • joining (data from multiple tables)

    During the course you will type and click along, following the instructors, preferably on your own machine. Please reach out to us if you need help installing DB Browser for SQLite (https://sqlitebrowser.org/dl/). If necessary, you can also use the on-site computers that have DB Browser for SQLite preinstalled.
    core areas:
    • Writing SQL queries
    • Filtering
    • Sorting
    • Aggregating
    • Joining
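    To give a flavor of the queries covered, here is a small sketch that runs the same kind of SQL against a throwaway database. It uses Python's built-in sqlite3 module instead of DB Browser for SQLite, and the tables and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # temporary in-memory database
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE papers  (id INTEGER PRIMARY KEY, author_id INTEGER, year INTEGER);
    INSERT INTO authors VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO papers  VALUES (1, 1, 2021), (2, 1, 2022), (3, 2, 2021), (4, 2, 2019);
""")

# One query that joins, filters, aggregates, and sorts:
rows = conn.execute("""
    SELECT a.name, COUNT(*) AS n_papers
    FROM papers AS p
    JOIN authors AS a ON a.id = p.author_id   -- joining
    WHERE p.year >= 2021                      -- filtering
    GROUP BY a.name                           -- aggregating
    ORDER BY n_papers DESC                    -- sorting
""").fetchall()

print(rows)  # [('Ada', 2), ('Grace', 1)]
```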

    instructors:
    • Volker Schwartze
    • Philipp Schäfer
  • description:

    Spreadsheets: they are loved, hated, and for many people indispensable. In science, they are a widely used way to organize data. However, there are many pitfalls, and the uncritical handling of spreadsheets can lead to severe misunderstandings or problems, as the loss of data on more than 10,000 COVID-19 cases in the UK shows. Even without such severe consequences, spreadsheets can be a source of annoyance if files created by others, or just in a different software, are not understandable or usable without additional effort. In addition, good data documentation will be discussed and Colectica will be introduced as a tool. The workshop consists of a theoretical and an interactive part. The exercises are demonstrated in Excel, but can also be applied to other systems.

    This course is part of the Data Carpentry Workshop. If you wish to receive a Data Carpentry Certificate, you must attend all parts of the workshop. In this case, please register here.
    core areas:
    • Good practice in creating spreadsheets
    • Data documentation (metadata)
    • Colectica presentation

    instructors:
    • Cora Assmann
    • Volker Schwartze
  • description:

    Are you working with data organised in spreadsheets? Do you usually spend more time on data cleansing and data quality improvements than on data analysis? And do you want a powerful tool that is free of charge and runs on every computer, including your local PC? If your answer to these questions is YES, then you should consider registering for this hands-on workshop.

    In this hands-on workshop we will first introduce what OpenRefine is and what it can do. You will learn how to import your data into OpenRefine, how to find and correct errors in your data, how to transform data, and how to save and export your cleaned data from OpenRefine. Finally, we will point you to additional resources.
    Participating in this workshop does not require any prior knowledge of OpenRefine.
    Installation instructions will be sent to you one week before the course starts.

    This course is part of the Data Carpentry Workshop. If you wish to receive a Data Carpentry Certificate, you must attend all parts of the workshop. In this case, please register here.
    core areas:
    • Overview of OpenRefine application
    • Data import
    • Data error correction
    • Data transformation
    • Data storage and export


    instructors:
    • Cora Assmann
    • Christian Knüpfer
  • description:

    We will start by having a look at good practices for and possible pitfalls while creating spreadsheets. Though we use Excel to demonstrate, you can follow along with LibreOffice as well.

    Next we will work with OpenRefine. We will learn to import data, find and correct errors, transform data, and eventually save and export our cleaned data.

    Then we will go on to learn how to use the Structured Query Language (SQL) to query relational databases.

    Finally, we will learn to program in Python; from the very beginning. We will work toward writing our first Python script.

    During the course you will type and click along, following the instructors; for SQL and Python preferably on your own machine. Please reach out to us if you need help installing LibreOffice (https://www.libreoffice.org/download/), OpenRefine (https://openrefine.org/download.html), DB Browser for SQLite (https://sqlitebrowser.org/dl/) or Python (https://www.python.org/downloads/). If necessary, you can also use the on-site computers that have the required software preinstalled.

    You can also attend only the courses on the individual topics. If you wish to do so, please register accordingly:
    core areas:
    * Working with Spreadsheets
    * Cleaning Data with OpenRefine
    * Querying Relational Databases with SQL
    * Programming in Python
    instructors:
    • Cora Assmann
    • Eckhard Kadasch
    • Volker Schwartze
    • Philipp Schäfer
  • description:

    In this course we look at how to develop software using the Julia programming language. We cover the idiosyncrasies of Julia as a programming language, learn how a Julia project is typically structured, look at package management and mention a few important packages, and look at how to call into software written in other programming languages.

    During the course you will type along, following the instructors, preferably on your own machine. Please reach out to us if you need help installing Julia. If necessary, you can also use the on-site computers that have Julia preinstalled.

    We assume that you have programming experience in general, but experience with Julia is not required.


    core areas:
    • Name Julia’s idiosyncrasies
    • Navigate Julia’s documentation
    • Call from Julia into code written in other languages
    • Find existing Julia libraries

    instructors:
    • Eckhard Kadasch
    • Philipp Schäfer
  • description:

    We will start by getting familiar with Bash, a command line shell and programming language. Knowing how to use Bash offers us access to various small programs that, when put together, can help automate tasks related to working with files and programs that can be accessed through a command line interface.

    Then we will learn how to use Git, a version control system. That means, we learn to track changes in source code: Who changed what, when, and—if we use it properly—for what reason. This can, for example, help track down bugs.

    Finally, we will learn to program in Python; from the very start. We will work toward writing our first Python script.

    During the course you will type along, following the instructors, preferably on your own machine. Please reach out to us if you need help installing Git (https://git-scm.com/downloads) or Python (https://www.python.org/downloads/); Bash is either preinstalled (macOS/Linux) or comes with Git (Windows). If necessary, you can also use the on-site computers that have the required software preinstalled.

    We expect you to be familiar with the basic concepts of file systems: files and directories.
    core areas:
    • Command Line with Bash
    • Version Control with Git
    • Programming with Python

    instructors:
    • Eckhard Kadasch
    • Christian Knüpfer
    • Frank Löffler
    • Volker Schwartze
    • Philipp Schäfer
  • description:
    Due to increasing digitization and datafication in all fields of research, the proper management of research data is becoming ever more important.
    Have you spent months collecting samples and measurements in the field or in the lab? Have you explored, analyzed, and interpreted this data and finally published your findings in a scientific journal? Well, then it is time to think about your data again and what to do with it now. Or are you just starting your PhD or your postdoc project and want to make sure not to overlook anything when it comes to obtaining and documenting your measurements?
    According to the guidelines for safeguarding good scientific practice, your results should be replicable and repeatable. Are you aware of the concept of FAIR data, which is mentioned in the research data policies of many funders, institutions, and journals? FAIR means that data are findable, accessible, interoperable, and re-usable. To ensure this, your data should be well documented, securely stored, and available for later reuse. Publishing your research data through a dedicated data journal or repository can help you with this and may also earn you an additional publication and further citations.
    A few days before the course starts, you will be given access to the preparation material (Moodle). It is recommended that you work through the material beforehand as it will be referred to in the course.


    Topics:
    • Basic definitions in research data management and the data life cycle
    • Data management plans (DMP)
    • Documentation, data organization, metadata
    • Storage and back-up
    • Archiving
    • Publication and re-use of research data
    • Legal aspects
    Course dates: May 8 and May 11, 9 a.m. - 1 p.m.
     
    Content focus
    • Introduction to research data management and the data-life-cycle concept
    • Preparing research data for re-use (data structure, data quality, metadata)
    • Opportunities and requirements in data publication and long-term data archiving

    instructors:
    • Cora Assmann
    • Roman Gerlach
  • description:

    Are you working with data organised in spreadsheets? Do you usually spend more time on data cleansing and data quality improvements than on data analysis? And do you want a powerful tool that is free of charge and runs on every computer, including your local PC? If your answer to these questions is YES, then you should consider registering for this hands-on workshop.

    In this hands-on workshop we will first introduce what OpenRefine is and what it can do. You will learn how to import your data into OpenRefine, how to find and correct errors in your data, how to transform data, and how to save and export your cleaned data from OpenRefine. Finally, we will point you to additional resources.

    Participating in this workshop does not require any prior knowledge of OpenRefine. Installation instructions will be sent to you one week before the course starts.
    core areas:
    • Overview of OpenRefine application
    • Data import
    • Data error correction
    • Data transformation
    • Data storage and export


    instructors:
    • Cora Assmann
    • Volker Schwartze
  • description:

    Git is a version control system for text files. It helps you keep a history of changes to your files. Using that history, you can see what you changed and when you did it. You can always revert your project to an earlier state, should you have accidentally deleted text or broken some functionality in your code. Git also lets you work together with others on the same project, keeping track of who changed what and in what order.

    GitLab is a platform that supports collaborating on a Git managed project and offers additional features. The university hosts an instance of GitLab that can be used by all employees and students.

    In this workshop, you will get to know Git and the platform GitLab. You will learn how to use Git in your daily work to keep track of changes to your code and other text documents. You will also work with GitLab, which helps you to collaborate with others on your projects. With the integrated issue tracking system and the option of hosting websites directly from your Git repositories, GitLab offers additional project management features that you will try out in practice.

    During the course you will type and click along, following the instructors, preferably on your own machine. Please reach out to us if you need help installing Git (https://git-scm.com/downloads). You do not need to install GitLab, as it is accessed through the browser. Participants can, however, use the on-site computers with Git preinstalled, if necessary.

    It is not necessary to have previous experience using Git.

    We will meet on Tuesday and Thursday (not on Wednesday!) from 8:15 a.m. to 12:00 noon.


    core areas:
    • Version Control with Git
    • Connecting local with GitLab repositories
    • Using the GitLab web interface

    instructors:
    • Frank Löffler
    • Philipp Schäfer
    • André Sternbeck
  • description:
    What options and tools are available for analyzing digitized museum inventories? How can data on museum objects be collected, compiled, and systematized? To what extent can research questions based on existing museum data be formulated and ultimately answered with the help of digital methods and tools? The participants of the seminar get to the bottom of these questions. Research questions about museum holdings can be posed directly or formulated from the museum data that has been gathered. The seminar serves as an introduction to querying APIs; the programming language Python and the tool Jupyter Notebook are used for this method of retrieving museum data. The Victoria & Albert Museum London and its digitized holdings serve as a case study. Participants can query the data of the V&A Museum with regard to diversity criteria (gender, origin), acquisition periods, questions of restitution, material specifics, genres, groups of works by artists, etc. This seminar will be held in German.
    instructors:
    • Elodie Sacher
    • Sander Münster
    • Ferdinand Maiwald
  • description:

    Building on the prior workshop, we talk about another part of the Adobe Suite: After Effects. We look into animation and video graphic effects to push your next funding proposal or science communication piece over the top!

  • description:

    Since everyone at the FSU Jena has an Adobe Creative Suite license, let's actually use it! We talk about the basics of video editing (how to cut, compose, and work with sound, titles, and basic animation) in Adobe Premiere Pro. You can also follow the workshop with freeware alternatives like DaVinci Resolve or Shotcut.
  • description:

    Code is everywhere, and scientific research is no exception. Programming allows researchers to handle large amounts of digital data with ease, to automate tasks that would otherwise be time-consuming or even impossible, and to explore new approaches. Programming skills make you more independent of pre-existing tools and let you tailor your workflow to your own needs.

    Python is one of the world's most popular programming languages, not least for scientific programming. Part of its popularity comes from the fact that it is rather easy to learn. But most importantly, you can use Python for a broad range of tasks, e.g. text analysis, sequence analysis, mathematical computations, machine learning, visualization, and much more.

    This workshop gives you a practical introduction to the basics of Python. It requires no prior experience with programming. Our goal is to show you some of the potential of Python, help you get started with programming and prepare you to take your next steps (on your own or in another course).


    core areas:
    • basic data types
    • variables
    • basic flow control
    • functions
    • basic file reading and writing
    • basic plotting
    • basic debugging

    instructors:
    • Frank Löffler
    • Philipp Schäfer
  • description:
     
    After months of collecting, analyzing, and interpreting data, you would now like to publish your results in a scientific journal? Then it is time to take another close look at your data and think about how to prepare it for what comes next. Or are you just starting your doctoral or postdoc project and want to make sure you have not overlooked anything in conducting and documenting your research?

    According to the DFG guidelines for safeguarding good scientific practice, your results should be traceable and reproducible. Have you ever heard of FAIR data? For your data, this means that they should be Findable, Accessible, Interoperable, and Reusable. Are you aware that publishing your data in a dedicated data journal or repository can not only help you meet these requirements, but may also earn you an additional publication and further citations?

    Publication and long-term archiving of your data are only two aspects of research data management. This workshop is intended to help you identify your data management needs, regardless of which phase of the project you are in. It also gives you practical guidance on how to organize, structure, describe, and publish your data in order to meet the requirements of good scientific practice.

    Course topics:
    • Definition of research data management and the research data life cycle
    • Data management plans
    • Documentation, data organization, metadata
    • Storage and back-up
    • Archiving
    • Publication and re-use of research data
    • Legal aspects

    This is an online course using Moodle and live video conferencing. We will provide self-study materials before the two sessions; participants are expected to work through the material in advance and complete the assigned tasks. The live sessions will include exercises, group work, discussions, and presentations.

    Workshop dates: February 15 and 17, 2023
    instructors:
    • Roman Gerlach
    • Jeanin Jügler
  • description:

    GitLab is a web application for managing Git repositories. Since it is built around Git, it is suitable for managing any project that mostly works with plain text files, for example software source code or TeX-based documents. With its built-in issue and wiki systems, it can, in certain cases, even be the right tool for managing a project without any files.

    This course will give you a foundational understanding of GitLab’s features, so that you can make informed decisions on how to use it as a tool.

    During the whole time, learners will follow along with the instructors’ demonstrations, putting what they learn immediately into practice.

    It is not necessary to have previous experience with Git. To get the most out of the section on task automation, a very basic understanding of Docker is helpful, but not required.
    core areas:
    • Navigate GitLab
    • Create, use, and delete GitLab projects
    • Collaborate on GitLab projects
    • Automate Tasks in GitLab
    • Manage projects in GitLab
    • Document projects in GitLab wikis

    instructors:
    • Philipp Schäfer
    • Frank Löffler
  • description:
    This event gives you an insight into the most important workflows of the Jena University Archives. You will learn about the archive's holdings and how to use them, as well as about retention periods and the archiving of records. The staff of the University Archives look forward to an engaged exchange with you.
  • description:
    This training is intended to enable you to implement all legal and technical data protection requirements relevant to everyday university practice. Using concrete examples and your questions, the principles of data protection law and the basics of information security are explained in a clear, practical way.
    instructors:
    • Maximilian Koop
  • description:
    Research data management (RDM) comprises all activities in the handling of research data, from generation, documentation, and storage to publication and archiving. In order to take the multitude of aspects in RDM into account, a data management plan (DMP) should be drawn up before the project starts. This plan documents the handling of the data generated in the research project and specifies the resources required. Appropriate research data management and the creation of a DMP are a prerequisite for more and more funding organizations when applying for projects and are therefore an important part of project planning. In addition, good planning helps to take costs into account from the outset when applying for funding, to secure support from appropriate partners, and to establish the infrastructure necessary for effective and secure handling of research data during the project.
    The workshop will give an overview of the requirements of different funding organizations regarding RDM and the creation of DMPs. It will also present the structure and content of DMPs as well as useful support options in the form of consulting services and tools. Participants will get the opportunity to practice drafting texts for DMPs during exercise sessions.
    core areas:
    • Requirements of different funding institutions
    • Structure and content of a Data Management Plan
    • Useful tools and services
    • Exercises to draft DMPs

    Speakers:
    Roman Gerlach | Servicedesk Research Data Management
    Dr. Cora Assmann | Thuringian Competence Network for Research Data Management
  • description:

    The two-day course will be held January 24 and January 31, 2023, from 8 a.m. to 12 noon.

    This is a hands-on introduction to programming. You will learn the most important concepts of programming with practical exercises using the language R. R is a well-documented, popular, and easily accessible programming language which is especially well suited for the analysis and manipulation of research data. Built around the scientific task of data analysis, you will learn how to read and access data, calculate simple statistics, index and plot the data, create functions for recurring tasks, as well as how to use if-else statements and loops. We will also cover best practices for writing code in R and how to export the results. No prior knowledge necessary.

    We will use the integrated development environment (IDE) RStudio throughout the workshop. Please install the language and the IDE before attending the course.

    This workshop is based on the Software Carpentry lesson Programming with R.
    core areas:
    • using RStudio
    • variables
    • data types
    • indexing data
    • analysing data
    • plotting
    • choices
    • loops
    • reading and writing data
    • code documentation
    • packages
    • using R scripts in workflows

    instructors:
    • Christian Knüpfer
    • Volker Schwartze
  • description:
    We will give an overview of the different ways to parallelize a given task and will make you familiar with the Linux command line. In the hands-on part you will submit your first computations (jobs) to the cluster and hopefully enjoy their results.

    Requirements:

    • FSU account (needs to be specified at the registration page)
    • no fear of Linux and the command line

    instructors:
    • Frank Löffler
    • André Sternbeck
  • description:

    Code is everywhere, and scientific research is no exception. Programming allows researchers to handle large amounts of digital data with ease, to automate tasks that would otherwise be time-consuming or even impossible, and to explore new approaches. Programming skills make you more independent of pre-existing tools and let you tailor your workflow to your own needs.

    Python is one of the world's most popular programming languages, not least for scientific programming. Part of its popularity comes from the fact that it is rather easy to learn. But most importantly, you can use Python for a broad range of tasks, e.g. text analysis, sequence analysis, mathematical computations, machine learning, visualization, and much more.

    This workshop gives you a practical introduction to the basics of Python. It requires no prior experience with programming. Our goal is to show you some of the potential of Python, help you get started with programming and prepare you to take your next steps (on your own or in another course).


    core areas:
    • basic data types
    • variables
    • basic flow control
    • functions
    • basic file reading
    • basic debugging

    instructors:
    • Frank Löffler
    • Philipp Schäfer
  • description:
     
    This introduction into R includes:
    • General introduction into the environment.
    • Basics of R syntax and objects.
    • Data handling in R.
    • Basic programming in R.
    • Graphics in R.
     
    This workshop addresses researchers interested in R with little or no previous experience. It includes hands-on exercises and a homework assignment.
     
    Requirements:
    For this workshop, please install the current versions of R (https://cran.r-project.org/) and RStudio.

    Workshop Dates:
    January 9, 10, 16, and 17, 2023; 1:00 p.m. - 5:00 p.m. (4 afternoons)
    instructors:
    • Jan Plötner
  • description:

    Within this workshop we will spend only very little time on what LaTeX can do, but will instead concentrate on you actually making your first steps. This workshop alone will likely not be enough for a beginner to use LaTeX in the future without further help or reference, but it should give you a good start and includes pointers on where to turn for examples.


    core areas:
    • document structure
    • basic formatting
    • symbols and math
    • images and figures
    • citations
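    A minimal document touching most of these areas might look like the sketch below (class, packages, and file names chosen for illustration):

```latex
\documentclass{article}
\usepackage{graphicx}  % needed for \includegraphics

\begin{document}

\section{Introduction}  % document structure

Some \emph{emphasized} and \textbf{bold} text.  % basic formatting

Symbols and math: $E = mc^2$, $\alpha + \beta$.

% Images and figures would be included like this:
% \includegraphics[width=\linewidth]{figure.pdf}

\end{document}
```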

    instructors:
    • Frank Löffler
    • Philipp Schäfer
  • description:

    The two-day online course, offered by the Research Data Management Helpdesk (Uni Jena) and ZB Med (Information Centre for Life Sciences), consists of knowledge transfer with a special focus on biomedical RDM topics, as well as interactive elements.
    core areas:
    • Basic definitions in research data management and the data life cycle
    • Data management plans (DMP) and DMP tools
    • Data collection, processing and analysis, publishing and sharing, preservation, reuse and search
    • Legal aspects and Licenses
    • Introduction to local and national RDM support facilities

    instructors:
    • Cora Assmann
    • Luiz Gadelha
  • description:

    If you have ever written a paper, worked with research data or programmed your own scripts, some of the following problems may sound familiar to you: You have accidentally overwritten something and would like to get it back from an earlier version of your file(s). You find yourself looking through a bunch of older versions wondering what exactly has changed between your current version and the older ones.

    Git helps you avoid these sources of frustration. As a version control system, Git lets you easily save changes in your files to a history and thus helps document your work. Using that history, you can see what you changed and when you did it. You can always go back and revert your project to an earlier stage, should you have accidentally deleted text or broken some functionality in your code. Git also lets you work together with others on the same project, even on the same file at the same time.

    In this workshop, we introduce you to the fundamental features of Git. You will learn how to use Git in your daily work to keep track of changes in your documents or code. Git was originally designed for software development, but quickly found users beyond the software community. So if you consider yourself a non-technical person, this workshop is still for you. The Git basics are easy to learn and easy to apply.


    Requirements: For this workshop you need a working installation of Git (version 2.23 or above). Downloads and installation instructions for various operating systems can be found here: https://git-scm.com/downloads.
    Certificate: This course is part of the Certificate Course "Tools for Digital Research". In order to receive the Library Carpentry Certificate you also have to attend the other two courses.
    core areas:
    • introduction to version control
    • install and configure Git (git config)
    • create a repository (git init)
    • basic Git workflow: change - stage - commit (git add, git commit)
    • inspect status (git status)
    • explore the version history (git log)
    • compare versions (git diff)
    • revert changes (git restore, git reset)
    • use graphical user interfaces (git gui, GitLab)

    instructors:
    • Christian Knüpfer
    • Philipp Schäfer
  • description:

    The command line is an interactive interface to your operating system. Instead of controlling your computer by clicking and dragging with the mouse, you type commands into the so-called command line or shell. Controlling your computer by typing at the keyboard looks old-fashioned and uncomfortable at first glance. But if you work with a lot of data or write programs, the command line is a very efficient tool. After some practice you will not want to miss it anymore. Command line interfaces are available on essentially all operating systems, including Linux and macOS as well as Microsoft Windows.

    In this workshop we will concentrate on common commands within the Unix/Linux command line, which is also available on Windows.


    Certificate: This course is part of the Certificate Course "Tools for Digital Research". In order to receive the Library Carpentry Certificate you also have to attend the other two courses.
    core areas:
    Use of the command line to
    • manage files and folders
    • start and control programs
    • search for files and within files
    • manipulate the content of files
    • create small scripts for repeating tasks

    instructors:
    • Christian Knüpfer
    • Philipp Schäfer
  • description:

    Are you working with data organised in spreadsheets? Do you usually spend more time on data cleansing and data quality improvements than on data analysis? And do you want a powerful tool that is free of charge and runs on every computer, including your local PC? If your answer to these questions is YES, then you should consider registering for this hands-on workshop.

    In this hands-on workshop we will first introduce what OpenRefine is and what it can do. You will learn how to import your data into OpenRefine, how to find and correct errors in your data, how to transform data, and how to save and export your cleaned data from OpenRefine. Finally, we will point you to additional resources.
    Participating in this workshop does not require any prior knowledge of OpenRefine.
    Installation instructions will be sent to you one week before the course starts.

    This workshop is based on the Library Carpentry lesson OpenRefine.
     
    Certificate: This course is part of the Certificate Course "Tools for Digital Research". In order to receive the Library Carpentry Certificate you also have to attend the other two courses.
    core areas:
    • Overview of OpenRefine application
    • Data import
    • Data error correction
    • Data transformation
    • Data storage and export


    instructors:
    • Cora Assmann
    • Christian Knüpfer
  • description:
    • automate repetitive, boring, error-prone tasks,
    • create, maintain and analyze sustainable and reusable data,
    • work effectively with IT-systems and colleagues,
    • better understand the use of software in research,
    • and much more.

    By attending three courses on OpenRefine, Command Line and Git, you can earn a Library Carpentry Certificate. The Carpentries is an international non-profit organization that aims to teach basic data and software skills to support efficient, open, and reproducible research.

    The Library Carpentry Certificate course consists of these three lessons:

    1. Data cleansing and quality improvement with OpenRefine, November 08 2022, 8 a.m. - 12 p.m.
    2. Introduction to the Command Line, November 15 2022, 8 a.m. - 12 p.m.
    3. Basic Version Control with Git: A Beginner's Workshop, November 22 2022, 8 a.m. - 12 p.m.

    The three courses can also be attended individually. In order to receive the Library Carpentry Certificate you have to attend all three lessons. In case you want to receive the certificate, please simply register for all three courses!


    core areas:
    • OpenRefine
    • The Unix Shell
    • Version Control with Git

    instructors:
    • Cora Assmann
    • Christian Knüpfer
    • Philipp Schäfer
  • description:
    Methods of descriptive and inferential statistics are the basic tools for analysing quantitative data. In this workshop we will get to know fundamental statistical methods and also apply them in practice using the analysis software SPSS. This includes presenting data in tables and graphs, calculating important summary statistics, and basic inferential procedures such as significance tests. Each method is first introduced in theory and then carried out on example data by the participants themselves.

    The course is aimed at doctoral candidates and postdocs who have so far worked rarely or not at all with statistical methods, or who want to refresh the basic knowledge from their studies.
    instructors:
    • Christof Nachtigall
  • description:

    Depending on the information actually stored, the basic requirements for storing it, and the retrieval options, various storage services are available at the university computing centre.

    This event is aimed at everyone who wants to store data centrally on the university network, in particular researchers, lecturers, IT representatives (IVV), as well as other staff and secretarial professionals.
    core areas:
    Storage - storing data
    • Application areas and special features of the storage service
    • Conditions of use
    • Outlook: user and group management
    Backup - backing up data
    • Possible uses of a backup
    • Application areas and special features
    • How backup differs from data archiving
    Archive - preserving data
    • Important framework conditions
    • Modalities and special considerations for data preservation
    • Long-term storage

    instructors:
    • Rechenzentrum der Universität
  • description:

    Spreadsheets: they are loved, hated and, for many people, indispensable. In science, they are a widely used way to organize data. However, there are many pitfalls, and the uncritical handling of spreadsheets can lead to severe misunderstandings or problems, as the loss of data on more than 10,000 COVID-19 cases in the UK shows. But even without such severe consequences, spreadsheets can be a source of annoyance if files that were created by others or just in a different software are not understandable or usable without additional effort.
    In addition, good data documentation will be discussed and Colectica will be introduced as a tool. The workshop consists of theoretical and interactive parts. The exercises are demonstrated in Excel, but can also be applied to other systems.
    core areas:
    • Good practice in creating spreadsheets
    • Data documentation (metadata)
    • Colectica presentation

    instructors:
    • Cora Assmann
    • Volker Schwartze
  • description:

    3D models are digital representations of (real) objects. Although the first industries that come to mind are probably the film and gaming industries, 3D models are used in a variety of other areas of work and life.

    For example, they can be used to visualize plans of buildings or to design new products. Many products that we use in everyday life are created on the basis of such models. But 3D models are also used in the field of medicine in diagnostics or for the production of individual prostheses. However, due to the continuous development of 3D printing technologies, digital 3D models are also becoming more and more relevant in the private sector.

    Especially in science, 3D models can play an important role, e.g. in the digitization of historical objects and buildings or archaeological finds (keyword Digital Humanities), as well as in the investigation of geological or physical processes or in the visualization of objects that are otherwise difficult to capture, such as chemical structures or astronomical objects. The fields of application of 3D models are very diverse and cover (almost) all disciplines.

    The workshop is addressed to all students, teachers, researchers and all other interested persons. No special prior knowledge is required.

    This workshop is organized by the Data Literacy Project of the University of Jena in cooperation with Lichtwerkstatt Jena and Prof. Sander Münster. If you have any questions about the event, please feel free to contact us at: dataliteracy@uni-jena.de.


    core areas:
    • Goals and application areas of 3D models
    • Basics of approaches and techniques
    • Practical introduction to the 3D modeling software Blender
    • Practical introduction to 3D scanning by photogrammetry
    • 3D printing

    instructors:
    • Volker Schwartze
    • Sander Münster
    • Johannes Kretzschmar
  • description:

    This workshop is intended to help you carry out your diverse tasks as effectively as possible. To give you a comprehensive overview, it presents the familiar services as well as recent developments on specific topics. For example, in addition to classic scientific computing via the command line, the URZ will in future also offer web-based interfaces for interactive use of the HPC resources. Further questions are welcome in the discussion round at the end.

    The course is aimed in particular at employees of the Friedrich-Schiller-Universität.


    core areas:
    • The URZ at a glance - important services
    • Scientific computing and data preservation
    • eLearning - mastering challenges efficiently
    • Questions and discussion

    instructors:
    • Rechenzentrum der Universität
  • description:

    The use of digital tools is an important basis for dealing with growing and increasingly complex data sets. This challenge is not limited to science, but affects almost all areas of our society. Knowing how to use programming languages often enables fast and flexible approaches to solving problems when working with data.

    The summer school is aimed at all students who work predominantly with numerical data and want to learn the basics of programming with Python. It is therefore particularly suitable for students from the fields of natural, life, economic, behavioral and social sciences and medicine, but is also open to all other interested parties.

    The course first teaches basic concepts and fundamental principles of programming. After first attempts in Python, practical exercises are worked on independently. All according to the motto: "Learning by doing!"

    The summer school is organized by the Data Literacy Jena (DaLiJe) project in collaboration with the Bioinformatics Core Facility Jena.

    Registration: Friedolin.
     
    core areas:
    • Basics of programming and numerics with Python
    • Specialization in processing and visualization of numerical data

    instructors:
    • Emanuel Barth
  • description:

    In this workshop, we are going to answer these questions. We start with explaining how Docker containers work and where the lines are between the container and the host operating system. Then you are going to learn — in practical exercises — how to use Docker's command-line interface to get containers, run and manage them, and to create your own container images.

    After this workshop you will be able to take advantage of Docker in your own scientific work. You will be able to run applications in a Docker container on a workstation and on a cluster, and also make your scientific workflows reproducible by creating and sharing your own Docker image.


    Prerequisites:

    In order to take part in this workshop, you should have basic knowledge of the Linux command line and should be able to navigate the file system.


    core areas:
    • Docker terminology: container image, container, Dockerfile
    • Downloading container images
    • Running containers
    • Managing containers and container images
    • Creating container images
    • Running Docker containers on an HPC cluster with Singularity
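
    A container image is typically described by a Dockerfile. As a minimal illustration (the base image and script name here are invented, not part of the course materials):

```dockerfile
# Hypothetical Dockerfile: package a small Python analysis script.
FROM python:3.11-slim          # start from a public base image
COPY analysis.py /app/         # add your own code to the image
WORKDIR /app
CMD ["python", "analysis.py"]  # default command when the container runs
```

    Built with `docker build -t my-analysis .` and started with `docker run my-analysis`, such an image can be shared so that others can rerun the same workflow.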

    instructors:
    • Eckhard Kadasch
    • André Sternbeck
  • description:

    Its flexible programming interface makes easy plots easy but also allows you to create very complex figures. Matplotlib is therefore an excellent tool for the everyday work of scientists, one for which getting into Python is worthwhile on its own.

    In this workshop, you will learn how to use Matplotlib for your scientific visualizations. We will look at the various types of plots Matplotlib can generate, how to style and annotate them, and how to export them in various formats. While doing so, we will explain the fundamental anatomy of a Matplotlib figure and give some advice on how to design plots well.

    At the end of this workshop you will not only be able to visualize your data, you will also have a tool at hand that lets you do this in a scriptable and, thus, repeatable fashion. The data changes — you can just rerun your script; no need for opening a plotting application, clicking, and manually adjusting and saving plots.
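
    As a small sketch of such a script (the data values here are made up for illustration):

```python
# Minimal scriptable Matplotlib figure: rerun the script to regenerate it.
import matplotlib
matplotlib.use("Agg")              # render without opening a window
import matplotlib.pyplot as plt

x = [0, 1, 2, 3, 4]
y = [xi ** 2 for xi in x]          # placeholder data

fig, ax = plt.subplots()           # a figure containing one axes
ax.plot(x, y, marker="o", label="y = x^2")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend()
fig.savefig("plot.png", dpi=150)   # export the plot to a file
```

    When the data changes, rerunning the script regenerates an identical-looking, up-to-date figure.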


    Prerequisites:

    To take part in this workshop, you should be familiar with the basics of Python. Some experience with NumPy arrays is beneficial but not required.


    core areas:
    • available types of plots
    • anatomy of Matplotlib figures
    • object-oriented and MATLAB-style programming interface
    • plot styling and annotation
    • exporting plots

    instructors:
    • Eckhard Kadasch
    • Frank Löffler
  • description:

    One of those tools is the NumPy package. NumPy provides Python with an efficient array datatype and accompanying compute functions, which together form the foundation of many of today's scientific libraries.

    In this workshop, you are going to learn how to use NumPy to solve your own computing tasks. We start by discussing what makes Python slow compared to other languages and how NumPy arrays remedy the situation. We are going to look at NumPy's memory model, introduce you to the most useful functions of the package, and show how you can use NumPy for tasks ranging from element-wise array operations through linear algebra to the implementation of numerical methods.


    Prerequisites:

    To take part in this workshop, you should be familiar with the basics of Python.


    core areas:
    • performance limitations of Python
    • memory model of NumPy arrays
    • how to create and work with NumPy arrays
      • important NumPy functions
      • avoiding Python loops with array operations
    • application in linear algebra and numerical methods
    • performance considerations: temporary arrays, copies, and views
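
    To give a flavour of these core areas, a short sketch (array values invented for illustration):

```python
import numpy as np

a = np.arange(5)          # array([0, 1, 2, 3, 4])
b = a * 2 + 1             # element-wise, no Python loop needed

m = np.eye(2)             # linear algebra: 2x2 identity matrix
v = np.array([3.0, 4.0])
w = m @ v                 # matrix-vector product

s = a[1:3]                # slicing returns a view, not a copy...
s[0] = 99                 # ...so this also changes a[1]
```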

    instructors:
    • Eckhard Kadasch
    • Frank Löffler
  • description:

    In this course we look at how to write professional code in the Julia programming language. We start by covering the idiosyncrasies of Julia, continue with properly structuring a Julia project, learn how to write efficient code in Julia, and mention a few important packages as well as how to call into software written in other programming languages.

    Learners will continuously follow the instructors, programming in their own Jupyter notebooks.

    We assume that learners have experience programming in general, but experience with Julia is not required.
    core areas:
    • Name Julia’s idiosyncrasies
    • Navigate Julia’s documentation
    • Choose the right data structures for efficient code
    • Call from Julia into code written in other languages
    • Find existing Julia libraries

    instructors:
    • Philipp Schäfer
  • description:

    GitLab is a web application for managing Git repositories. Since it is built around Git, it is suitable for managing any project that mostly works with plain text files, for example software source code or TeX-based documents. With its built-in issue and wiki systems, it can, in certain cases, even be the right tool for managing a project without any files.

    This course will give you a foundational understanding of GitLab’s features, so that you can make informed decisions on how to use it as a tool.

    During the whole time, learners will follow along the instructors’ demonstrations, putting what they learn immediately into practice.

    We assume basic understanding of Git and the Unix shell. Having taken recent courses on either topic is sufficient. To get the most out of the section on task automation, a very basic understanding of Docker is helpful, but not required.
    core areas:
    • Navigate GitLab
    • Create, use, and delete GitLab projects
    • Collaborate on GitLab projects
    • Automate Tasks in GitLab
    • Manage projects in GitLab
    • Document projects in GitLab wikis
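
    Task automation in GitLab is configured through a file named .gitlab-ci.yml in the repository. A minimal, hypothetical example (the image and command are placeholders):

```yaml
# One CI job that runs on every push, inside a Docker container.
check:
  image: python:3.11-slim
  script:
    - python --version
```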

    instructors:
    • Frank Löffler
    • Philipp Schäfer
  • description:

    Python is one of the world's most popular programming languages, not least for scientific programming. Part of its popularity comes from the fact that it is rather easy to learn. But most importantly, you can use Python for a broad range of tasks, e.g. text analysis, sequence analysis, mathematical computations, machine learning, visualization, and many more.

    This workshop gives you a practical introduction to the basics of Python. It requires no prior experience with programming. Our goal is to show you some of the potential of Python, help you get started with programming and prepare you to take your next steps (on your own or in another course).


    core areas:
    • basic data types
    • variables
    • basic flow control
    • functions
    • basic file reading and writing
    • command line arguments
    • basic debugging
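
    A tiny sketch touching several of these areas (the function and file name are invented for illustration):

```python
# basic types, flow control, a function, and file reading/writing
def describe(n):
    """Return whether an integer is even or odd."""
    if n % 2 == 0:
        return f"{n} is even"
    return f"{n} is odd"

lines = [describe(n) for n in range(3)]

with open("demo.txt", "w") as f:   # write the results to a file
    f.write("\n".join(lines))

with open("demo.txt") as f:        # read them back
    print(f.read())
```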

    instructors:
    • Eckhard Kadasch
    • Frank Löffler
  • description:

    This introduction to R includes:
    • General introduction into the environment.
    • Basics of R syntax and objects.
    • Data handling in R.
    • Basic programming in R.
    • Graphics in R.

    This workshop addresses researchers interested in R with little or no previous experience in R. This workshop includes hands-on exercises and a homework assignment.

    Requirements:
    For this workshop please install the current versions of R (https://cran.r-project.org/) and RStudio (https://rstudio.com/products/rstudio/download/#download) before the workshop.

    Recommendations:
    A major part of this workshop will be spent working in R. In order to avoid switching between my shared screen and your computer, I would recommend using two monitors for this workshop.

    Workshop dates:
    The workshop will consist of four afternoon sessions:
    May 23 and 24 and June 02 and 03, 2022; 1.00 p.m. – 5.00 p.m.
    instructors:
    • Jan Plötner
  • description:

    If you are interested in learning Git from scratch, please register for the first part Basic Version Control with Git: A Beginner's Workshop (see our catalogue).

    If you work on documents or code together with multiple people, it can quickly get quite complex to keep track of everyone's changes. Maybe you e-mail different versions back and forth and start to lose track of the individual contributions. Or you use a shared folder on Nextcloud or Dropbox but run the risk of overwriting other people's changes when working on the same file simultaneously. This is where Git can help you.

    Git is not only a great tool for versioning your own projects, it also provides you with a robust framework for collaborating, that is, for keeping track of everyone's changes and for integrating them into one repository — be it code, documents, or even data. And Git scales from one, to two, to many people.

    In this workshop, you learn how to use Git's collaborative features. You will learn how to organize your work in branches, merge them together, as well as how to share your work with others using remote repositories and resolve any conflicts that may arise.


    Prerequisites: If you want to join this workshop, you should have a basic familiarity with Git on the command line. That is, you should know how to create repositories, how to stage and commit files, and how to look at the version history and the state of a Git repository.

     

    You should also have a working installation of Git (version 2.23 or above). Downloads and installation instructions for various operating systems can be found here: https://git-scm.com/downloads.


    core areas:
    • working with branches (git branch)
    • clone a repository (git clone)
    • working with a remote repository (git pull, git push)
    • resolve version conflicts (git merge)
    • inspect who changed what (git blame)
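
    A condensed sketch of the branch-and-merge part of this workflow, in a throwaway repository (file and branch names invented):

```shell
set -e
cd "$(mktemp -d)"                  # work in a scratch directory
git init -q
git config user.email demo@example.org
git config user.name Demo

echo "draft" > paper.txt
git add paper.txt
git commit -qm "initial draft"

git checkout -q -b experiment      # create and switch to a branch
echo "new idea" >> paper.txt
git commit -qam "try a new idea"

git checkout -q -                  # back to the previous branch
git merge -q experiment            # integrate the branch
```

    Sharing work then uses the same model with a remote repository: git push uploads your branches, and git pull fetches and merges those of others.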

    instructors:
    • Eckhard Kadasch
    • Christian Knüpfer
  • description:

    If you are interested in advanced topics regarding Git, please also register for the second part Collaborative Version Control with Git: An Advanced Workshop (see our catalogue).

    If you have ever written a paper, worked with research data or programmed your own scripts, some of the following problems may sound familiar to you: You have accidentally overwritten something and would like to get it back from an earlier version of your file(s). You find yourself looking through a bunch of older versions wondering what exactly has changed between your current version and the older ones.

    Git helps you avoid these sources of frustration. As a version control system, Git lets you easily save changes in your files to a history and thus helps you document your work. Using that history, you can see what you changed and when you did it. You can always go back and revert your project to an earlier stage, should you have accidentally deleted text or broken some functionality in your code. Git even lets you work together with others on the same project or even on the same file at the same time, but more on that in the second part of our Git workshop series.

    In this workshop, we introduce you to the fundamental features of Git. You will learn how to use Git in your daily work to keep track of changes in your documents or code. Git has been originally designed for software development, but has quickly found users beyond the software community. So if you consider yourself a non-technical person, this workshop is still for you. The Git basics are easy to learn and easy to apply.


    Requirements: For this workshop you need a working installation of Git. Downloads and installation instructions for various operating systems can be found here: https://git-scm.com/downloads.

     


    core areas:
    • introduction to version control
    • install and configure Git (git config)
    • create a repository (git init)
    • basic Git workflow: change - stage - commit (git add, git commit)
    • inspect status (git status)
    • explore the version history (git log)
    • compare versions (git diff)
    • revert changes (git restore, git reset)
    • use a graphical user interface (git gui, GitLab)
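
    The basic cycle looks like this in practice (repository and file names invented for illustration):

```shell
set -e
cd "$(mktemp -d)"                 # scratch directory for the example
git init -q                       # create a repository
git config user.email demo@example.org
git config user.name Demo

echo "first line" > notes.txt     # change
git add notes.txt                 # stage
git commit -qm "add notes"        # commit

echo "second line" >> notes.txt
git status --short                # shows notes.txt as modified
git diff                          # compare working copy with last commit
git log --oneline                 # explore the history
```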

    instructors:
    • Eckhard Kadasch
    • Christian Knüpfer
  • description:

    The command line is an interactive interface to your operating system. Instead of controlling your computer by clicking and dragging with the mouse, you type in commands on the so-called command line or shell. Controlling your computer by hammering at the keyboard looks really old-fashioned and uncomfortable at first glance. But if you are working with a lot of data or are programming, using the command line is a very efficient instrument. After some training period you will not want to miss it anymore. Command line interfaces are available on essentially all operating systems, including Linux, macOS as well as Microsoft Windows.

    In this workshop we will concentrate on common commands within the Unix/Linux command line, which is also available on Windows.


    core areas:
    Use of the command line to
    • manage files and folders
    • start and control programs
    • search for files and within files
    • manipulate the content of files
    • create small scripts for repeating tasks
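
    For example, a handful of commands of the kind covered (file names invented for illustration):

```shell
cd "$(mktemp -d)"                    # scratch directory
mkdir data                           # manage files and folders
echo "alpha" > data/a.txt
echo "beta"  > data/b.txt

ls data                              # list the folder's contents
grep -l beta data/*.txt              # search within files
cat data/a.txt data/b.txt | sort     # combine and sort file contents
```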

    instructors:
    • Frank Löffler
    • Philipp Schäfer
  • description:
    Due to the increasing digitization and datafication in all fields of research, the proper management of research data becomes increasingly important. You spent months collecting samples and measurements in the field or in the lab? You explored, analysed and interpreted this data and finally published your findings in a scientific journal? Well, then it is time to think about your data again and what to do with it now. Or are you just starting your PhD or your postdoc project and want to make sure not to overlook anything when it comes to obtaining and documenting your measurements?

    According to the guidelines for safeguarding good scientific practice, your results should be replicable and repeatable. Are you aware of the concept of FAIR data that is mentioned in the research data policies of many funders, institutions and journals? FAIR means that data are findable, accessible, interoperable and re-usable. To ensure this, your data should be well documented, securely stored and available for later reuse. Publishing your research data through a dedicated data journal or repository may help you with this and may also earn you an additional publication and further citations.

    Data publishing and long-term preservation are just two aspects of research data management. This workshop shall help you determine your data management requirements, no matter at which stage of the project you are. In addition, the course provides you with practical guidance on how to organize, structure, describe and publish your data in order to comply with good scientific practice. Topics of the course:
    • Basic definitions in research data management and the data life cycle
    • Data management plans (DMP)
    • Documentation, data organisation, metadata
    • Storage and back-up
    • Archiving
    • Publication and re-use of research data
    • Legal aspects

    instructors:
    • Cora Assmann
    • Luiz Gadelha
    • Jitendra Gaikwad
  • description:

    Topics of the course:
    • Basic definitions in research data management
    • Data management plans (DMP) and DMP tools
    • Data collection
    • Data processing and analysis
    • Data publishing and sharing
    • Data preservation
    • Data reuse and search
    • Legal aspects (privacy issues) and Licenses
    • Introduction to local and national RDM support facilities


    The course consists of two sessions. The first is on Monday, 07.03.2022 from 09:00 to 12:30, the second on Wednesday, 09.03.2022 from 09:00 to 12:30.

    After registration, you will receive a questionnaire in which you can enter your expectations and questions about the course. One week before the course starts, you will get the access information for the online event.
    instructors:
    • Cora Assmann
    • Luiz Gadelha
  • description:

    A winter school with a focus on the humanities, law and the social sciences

    Programming and tying your shoes have one thing in common: to learn it, you have to do it (again and again).

    In this course you will learn to tie loops. The variable lies in the input and output of the lace. You have to make a case distinction and then give the processor the appropriate commands. You fetch the shoes for this exercise program from memory and put them back there afterwards. Finally, you write down the steps you have learned as an algorithm and translate it into an everyday language that others can interpret as well.

    If you know exactly what the highlighted words mean with respect to programming, you probably do not need this course.

    The winter school is aimed at all students who work with texts and want to learn the basics of programming with Python. It is therefore primarily intended for students of the humanities, social sciences and law, but is also open to all other interested parties.

    The course starts with a general introduction to basic programming concepts and to how a computer works. The next part covers fundamental principles of programming - such as commands, variables, loops and case distinctions - and takes first steps in Python. You will then learn to develop your own programs for working with texts. The course consists mainly of practical exercises in the Python programming language - learning by doing!

    The event takes place online via Zoom. Access details and further information will be sent to participants by 16 February 2022 at the latest. A certificate of attendance will be issued.

    Registration is possible until 15 February 2022.

  • description:

    Bash is an interactive interface to your operating system. Instead of controlling your computer by clicking and dragging with your mouse, you type in commands at the so-called command line, terminal, or shell, of which Bash is the most widespread. Controlling your computer by hammering at the keyboard looks really old-fashioned and uncomfortable at first glance. But if you are working with a lot of data or writing your own computer programs, using the command line is a very efficient tool. After some training period you will not want to miss it anymore. With its roots in the Unix operating system, Bash is nowadays available for Linux, macOS as well as Microsoft Windows.

    This is the advanced Bash course. We will learn how to use Bash for

    • searching for files and within files and
    • creating Bash scripts for repeating tasks.
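
    As a taste of the scripting part, a loop that renames a batch of files (file names invented for illustration):

```shell
cd "$(mktemp -d)"            # scratch directory
touch one.txt two.txt        # files to operate on

for f in *.txt; do           # repeat a task for every matching file
    mv "$f" "${f%.txt}.bak"  # strip the .txt suffix, append .bak
done
ls
```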

     


    instructors:
    • Christian Knüpfer
    • Philipp Schäfer
  • description:

    Within this workshop we will spend only very little time on what LaTeX can do, but will instead concentrate on you actually making your first steps. This workshop alone will likely not be enough for a beginner to use LaTeX in the future without further help or reference, but it should give you a good start, including pointers on where to turn for examples.


    core areas:
    • document structure
    • basic formatting
    • symbols and math
    • images and figures
    • citations

    instructors:
    • Frank Löffler
    • Christian Knüpfer
  • description:
    hopefully enjoy their results.

    Requirements:

    • FSU account (needs to be specified at the registration page)
    • no fear of Linux and the command line

    instructors:
    • André Sternbeck
  • description:

    Gephi is a popular and easy-to-use open-source tool for working with networks. In this workshop we will use Gephi to create example networks from data, visualise these networks and perform various analyses on them.


    instructors:
    • Christian Knüpfer
  • description:
    Research data management (RDM) covers all activities involved in handling research data, from creation, documentation and storage through to publication and archiving. To take the many aspects of RDM into account, a data management plan (DMP) should be drawn up before the project starts; it documents how the data generated in the research project will be handled and specifies the resources required. More and more funding organisations require appropriate research data management and a DMP when projects are proposed, making it an important part of project planning. Good planning also helps to include the associated costs in the funding application from the outset, to secure support from suitable partners, and to set up the infrastructure required for handling the research data effectively and securely during the project.
    The event gives an overview of the requirements of the various funding organisations regarding RDM and the preparation of DMPs. It also presents the structure and the main content of a DMP, as well as useful support in the form of consulting services and tools.

    The course is held in English.
    core areas:
    • Requirements of various funding organisations
    • Overview of the structure and content of a DMP
    • Useful tools and support services

    instructors:
    • Roman Gerlach | Kontaktstelle Forschungsdatenmanagement
    • Dr. Cora Assmann | Thüringer Kompetenznetzwerk Forschungsdatenmanagement
  • description:
    Bash is an interactive interface to your operating system. Instead of controlling your computer by clicking and dragging with the mouse, you type in commands on the so-called command line or shell, of which Bash is the most widespread. Controlling your computer by hammering at the keyboard looks really old-fashioned and uncomfortable at first glance. But if you are working with a lot of data or writing your own computer programs, using the command line is a very efficient instrument. After some training period you will not want to miss it anymore. With its roots in the Unix operating system, Bash is nowadays available for Linux, macOS as well as Microsoft Windows. In this workshop we will learn how to use Bash for:
    • managing files and folders,
    • starting and controlling programs,
    • searching for files and within files,
    • manipulating the content of files, and
    • creating Bash scripts for repeating tasks.

    instructors:
    • Christian Knüpfer
    • Philipp Schäfer
  • description:

    Code is everywhere - and scientific research is no exception, whether in the STEM disciplines or, more recently, in the growing fields of digital humanities and computational social science. Programming allows researchers to handle large amounts of digital data with ease, to automate tasks that would otherwise be time-consuming or even impossible to do, and to explore new approaches. Programming skills make you more independent of pre-existing tools and allow you to tailor your workflow to your own needs.

    Python is one of the world's most popular programming languages, not least for scientific programming. Part of its popularity comes from the fact that it is rather easy to learn. But most importantly, you can use Python for a broad range of tasks, e.g. text analysis, sequence analysis, mathematical computations, machine learning, visualization, and many more.

    This two-session workshop gives you a practical introduction to the basics of Python. It requires no prior experience with programming. Our goal is to show you some of the potential of Python, help you get started with programming and prepare you to take your next steps (on your own or in another course).

    1. The course consists of two sessions. The first is on Tuesday, 04.01.2022 from 10:00 to 12:00, the second on Tuesday, 11.01.2022 from 10:00 to 12:00.
    2. If possible, please also register for the course via Indico: https://indico.rz.uni-jena.de/event/12/.

    instructors:
    • Eckhard Kadasch
    • Yannic Bracke
  • description:

    If you have ever written a paper or worked with research data, some of the following problems may sound familiar to you: You have accidentally overwritten something and would like to get it back from an earlier version of your file(s). You find yourself looking through a bunch of older versions wondering what exactly has changed between your current version and the older version. You and a colleague work on the same files and have to e-mail different versions back and forth. You and a colleague use a shared folder (e.g. Nextcloud, Dropbox) but made edits at the same time, so some of your edits are lost.

    These unpleasant situations can be avoided by using git. As a version control system, git helps you keep track of your work and collaborate with other people. It enables easy documentation, saving and retrieving earlier versions, and working with others in the same directory or even on the same file at the same time.

    Git has mostly been used for software development so far, but this should not put off researchers from non-technical disciplines. The git basics are easy to learn and easy to apply. In this two-session course, you will be introduced to the fundamental features of git and learn how to use it in your daily work.

  • description:

    Data security may only be a part of IT security, but even when concentrating on the security of data, the list of available tools is at least as long as the list of possible dangers.

    In this introductory workshop, we will discuss topics ranging from social engineering to good (and not so good) password practices, briefly touch on the concept of public/private key encryption, and then show you how to use some of the more common tools built on these methods, including password managers, GPG, VPN, e-mail signatures and encryption, and, last but not least, your browser.

    Given the large zoo of tools, we may well not cover your tools of choice. However, we will focus on those aspects that are similar across different products.

  • description:

    In 1804, the should-be famous engineer Richard Trevithick invented something to connect us all: the very first steam locomotive, a train. While it was a huge success at the time, little did he know that he was laying the foundation of a much bigger phenomenon: I am, of course, talking about the hype train. But while back then the train was used to connect people, this train is all about isolation, and with rising COVID numbers, this aspect is probably more important than ever.

    In this workshop, we are going to put both of these aspects together: we will discuss the hype around Docker that appeared over the past years and why its isolation features are so important to its success.

    With practical exercises and a sprinkling of theoretical background, you will develop an understanding of how Docker (and container engines in general) works, what it is used for, and how you can take advantage of it. You will learn how to use the Docker tools to run and manage containers and to release your own software to the public. Then, if time permits, we will dive into higher-level tools that orchestrate collections of containers across the boundaries of a single physical machine.

    So, if that sounds interesting to you, hop on the hype train, or it will leave the station without you! All aboard!

  • description:

    Are you working with data organised in spreadsheets? Do you usually spend more time on data cleansing and data quality improvements than on data analysis? And do you want a powerful tool that is free of charge and runs on any computer, including your local PC? If your answer to these questions is YES, then you should consider registering for this hands-on workshop.

    OpenRefine is a powerful, free and open-source tool to clean, correct, codify, and extend your tabular data. Using OpenRefine will save you hours of manually editing and correcting data.

    In this hands-on workshop we will first introduce what OpenRefine is and what it can do. You will learn how to import your data into OpenRefine, how to find and correct errors in your data, how to transform data, and how to save and export your cleaned data from OpenRefine. Finally, we will point you to additional resources to continue learning after the workshop.

    Participating in this workshop requires neither a prior installation nor any knowledge of OpenRefine.

  • description:
    Due to the increasing digitization and datafication in all fields of research, the proper management of research data is becoming ever more important. Have you spent months collecting samples and measurements in the field or in the lab? Have you explored, analysed and interpreted this data and finally published your findings in a scientific journal? Then it is time to think about your data again and what to do with it now. Or are you just starting your PhD or postdoc project and want to make sure not to overlook anything when it comes to obtaining and documenting your measurements?

    According to the guidelines for safeguarding good scientific practice, your results should be replicable and repeatable. Are you aware of the concept of FAIR data, which is mentioned in the research data policies of many funders, institutions and journals? FAIR means that data are findable, accessible, interoperable and re-usable. To ensure this, your data should be well documented, securely stored and available for later reuse. Publishing your research data through a dedicated data journal or repository can help with this and may also earn you an additional publication and further citations.

    Data publishing and long-term preservation are just two aspects of research data management. This workshop will help you determine your data management requirements, no matter at which stage of your project you are. In addition, the course provides practical guidance on how to organize, structure, describe and publish your data in order to comply with good scientific practice. Topics of the course:
    • Basic definitions in research data management and the data life cycle
    • Data management plans (DMP)
    • Documentation, data organisation, metadata
    • Storage and back-up
    • Archiving
    • Publication and re-use of research data
    • Legal aspects

    Target group: Doctoral candidates and postdocs from the environmental and earth sciences (e.g. ecology, biology, geology, geography). This will be an online course using Moodle and live video conferences. We will provide self-study material prior to the two sessions, and we expect participants to study the material beforehand and to complete the tasks given. During the live sessions, there will be exercises, group work, discussions and some presentations.

    Course dates: 25 and 27 October, 9-13 h
    instructors:
    • Cora Assmann
    • Annett Schröter
    • Volker Schwartze
  • description:

    comic by xkcd.

    Spreadsheets: they are loved, hated, and for many people indispensable. In science, they are a widely used way to organize data. However, there are many pitfalls, and the uncritical handling of spreadsheets can lead to severe misunderstandings or problems, as the loss of data on more than 10,000 COVID-19 cases in the UK shows. Even without such severe consequences, spreadsheets can be a source of annoyance if files that were created by others, or just in a different software, are not understandable or usable without additional effort. In this workshop, we will introduce possible pitfalls as well as some good-practice guidelines for creating spreadsheets.

  • description:
    Research data management (RDM) covers all activities involved in handling research data, from creation, documentation and storage to publication and archiving. To account for the many aspects of RDM, a data management plan (DMP) should be drawn up before the project even starts; it documents how the data generated in the research project will be handled and specifies the resources required. Appropriate research data management and the creation of a DMP are required by a growing number of funding organisations when applying for projects and are therefore an important part of project planning. Good planning also helps to account for expected costs when applying for funding, to secure support from suitable partners, and to set up the infrastructure needed for an effective and secure handling of the research data over the course of the project.

    The event gives an overview of the requirements of the various funding organisations regarding RDM and the creation of DMPs. It also presents the structure and main content areas of a DMP as well as useful support options in the form of consulting services and tools.
     
    core areas:
    • Requirements of various funding organisations
    • Overview of the structure and content of a DMP
    • Useful tools and support services

    Course coordination:
    Benjamin Sippel

    Instructor:
    Roman Gerlach | Research Data Management Helpdesk
     
  • description:


    The two-day online course, offered by the Research Data Management Helpdesk (Uni Jena) and ZB Med (Information Centre for Life Sciences), combines knowledge transfer, with a special focus on biomedical RDM topics, with interactive elements.


    core areas:
    • Basic definitions in research data management and the data life cycle
    • Data management plans (DMP) and DMP tools
    • Data collection, processing and analysis, publishing and sharing, preservation, reuse and search
    • Legal aspects and Licenses
    • Introduction to local and national RDM support facilities

    instructors:
    • Cora Assmann
    • Roman Gerlach
    • Volker Schwartze
    • Annett Schröter