LSTM Keras Audio

As "Deep Dive Into OCR for Receipt Recognition" puts it: no matter what you choose, an LSTM or another complex method, there is no silver bullet. LSTMs are generally used to model sequence data, and audio has exactly that property: audio data is basically a sequence of numbers. In this post, you will discover how to develop LSTM networks in Python using the Keras deep learning library to address a demonstration time-series prediction problem. Later, we will use recurrent neural networks and LSTMs to implement chatbot and machine translation systems. Earlier this week I gave a talk at Localhost, the Recurse Center's public-facing technical speaker series. This course introduces you to Keras and shows you how to create applications with maximum readability; the "Neural Networks with Keras Cookbook" by V Kishore Ayyadevara likewise collects over 70 recipes leveraging deep learning techniques across image, text, audio, and game bots.

The way Keras LSTM layers work is by taking in a NumPy array of 3 dimensions (N, W, F), where N is the number of training sequences, W is the sequence length, and F is the number of features per time step. If we want to stack an LSTM on top of convolutional layers, we can simply do so, but we need the layers underneath to return full sequences (return_sequences=True); otherwise, only the output at the final time step will be passed up. LSTM has a lot of advantages compared with the simple recurrent neural network but, at the same time, it has four times more parameters, because each gate and the candidate state g has its own set of parameters V, W, and b. Here is a quick example: from keras.models import Sequential; from keras.layers import LSTM; layer = LSTM(500)  # 500 is the hidden size. In fact, Keras also has a way to return predicted values step by step, using the "stateful" flag.

Creating an LSTM autoencoder in Keras can be achieved by implementing an encoder-decoder LSTM architecture and configuring the model to recreate the input sequence. The same encoder-decoder idea powers forecasting: the encoding model consumes the inputs one step at a time and finds a hidden representation of the time-series data, and the prediction model takes over the final state of the encoding model and predicts the values that follow the data that was fed to the encoder.

These building blocks recur across applications: remaining-useful-life (RUL) prediction for Li-ion batteries; Human Activity Recognition (HAR), where we will train an LSTM neural network (implemented in TensorFlow) on accelerometer data; and text classification, where the example script pretrained_word_embeddings.py loads pre-trained GloVe word embeddings into a frozen Keras Embedding layer and uses it to train a text classification model on the 20 Newsgroup dataset. In these classifiers, softmax is used as the activation function with sparse categorical cross-entropy on the final dense layer. After completing this post, you will know how to set such models up; the tutorial "How to Develop a Bidirectional LSTM for Sequence Classification" goes further. For sentiment, see "Deep CNN-LSTM with Combined Kernels from Multiple Branches for IMDb Review Sentiment Analysis" by Alec Yenter (California State University, Fullerton) and Abhishek Verma (New Jersey City University). In multimodal settings, the key to this challenge is not only the combination of CNNs and RNNs but also the inclusion of an audio track that can be separately modeled and integrated.
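Since the autoencoder recipe comes up more than once here, a minimal sketch may help. This is an illustrative toy, not code from any of the quoted posts: the layer width, the 99x13 MFCC-like shape, and the random training data are all assumptions.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

    timesteps, features = 99, 13          # e.g. 99 MFCC frames of 13 coefficients
    model = Sequential()
    model.add(LSTM(64, input_shape=(timesteps, features)))  # encoder: sequence -> vector
    model.add(RepeatVector(timesteps))                      # repeat encoding once per time step
    model.add(LSTM(64, return_sequences=True))              # decoder: vector -> sequence
    model.add(TimeDistributed(Dense(features)))             # reconstruct each frame
    model.compile(optimizer="adam", loss="mse")

    X = np.random.rand(32, timesteps, features)             # (N, W, F) dummy data
    model.fit(X, X, epochs=2, batch_size=8)                 # target = input: recreate the sequence

Once fit, the first LSTM on its own is the "encoder" whose final state can be reused, exactly as the future-predictor setup above describes.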
This blog post is part two in our three-part series of building a Not Santa deep learning classifier (i.e., a deep learning model that can recognize if Santa Claus is in an image or not). Get to grips with the basics of Keras to implement fast and efficient deep-learning models; for each task we show an example dataset and a sample model definition that can be used to train a model from that data. The current release is Keras 2, and it now works with Python 3 and TensorFlow 1. This course will teach you how to build models for natural language, audio, and other sequence data.

In Keras I can define the input shape of an LSTM (and GRU) layer by giving the number of training samples inside my batch (batch_size), the number of time steps, and the number of features. A recurring Stack Overflow question, "Keras Input layer (None, 2) with LSTM but didn't work", comes down to exactly this convention. Related configuration lives in the backend: the data format defaults to the image_data_format value found in your Keras config file at ~/.keras/keras.json, and if you never set it, then it will be "channels_last". At a lower level, if you use LSTM(..., implementation=2), then you will get an op-kernel graph with two matmul op-kernels, one biasAdd op-kernel, three element-wise multiplication op-kernels, and several op-kernels for the non-linear functions and matrix manipulation.

Audio is a natural fit for all of this. I am trying to implement an LSTM-based classifier to recognize speech; I'd like to create an audio classification system with Keras that simply determines whether a given sample contains human voice or not. Since this problem also involves a sequence of a similar sort, an LSTM is a great candidate to be tried. Studying deep neural networks, specifically the LSTM, I decided to follow the idea proposed in "Building Speech Dataset for LSTM binary classification" to build a classifier; this would be my first machine learning attempt. In the music variant, you will train your algorithm on a corpus of Jazz music; run the cell below to listen to a snippet of the audio from the training set. Audio will also be important for self-driving cars, so they can not only "see" their surroundings but "hear" them as well.

LSTM, first proposed in Hochreiter and Schmidhuber's 1997 paper "Long Short-Term Memory", has feedback connections, unlike standard feedforward neural networks. There is plenty of interest in recurrent neural networks (RNNs) for the generation of data that is meaningful, and even fascinating, to humans; nevertheless, a lot of work remains before Magenta models are writing complete pieces of music or telling long stories. In one walkthrough, a pre-trained ResNet-152 model is used as an encoder and the decoder is an LSTM network; in a sequence-tagging setup, there is a dense layer between the LSTM output and the CRF layer, and I'd expect that it is calculated in the CRF. The Keras repository also ships runnable scripts such as imdb_cnn_lstm.py, and for a different research direction see "Learning Deep Nearest Neighbor Representations Using Differentiable Boundary Trees".
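Here is a minimal sketch of the voice / no-voice classifier described above, under stated assumptions: clips are already converted to 250 time steps of 13 features each (for example MFCCs), and the sizes, labels, and random data are placeholders rather than anything from the quoted question.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    timesteps, features = 250, 13
    model = Sequential()
    model.add(LSTM(64, input_shape=(timesteps, features)))  # (batch, W, F) in, one vector out
    model.add(Dense(1, activation="sigmoid"))               # 1 = voice, 0 = no voice
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    X = np.random.rand(100, timesteps, features)            # stand-in for real features
    y = np.random.randint(0, 2, size=(100,))
    model.fit(X, y, batch_size=16, epochs=2)

Note that batch_size is passed to fit(), while the (time steps, features) pair goes into input_shape; that is the shape convention the Stack Overflow question above trips over.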
Keras provides convenient methods for creating convolutional neural networks (CNNs) of 1, 2, or 3 dimensions: Conv1D, Conv2D, and Conv3D, the workhorses of image classification with Keras and deep learning. Indeed, recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. As you can read in my other post, "Choosing framework for building Neural Networks (mainly RNN - LSTM)", I decided to use the Keras framework for this job. Just like Keras, it works with either Theano or TensorFlow, which means that you can train your algorithm efficiently on either CPU or GPU. Mostly I see people forming a model with a high-level library, e.g. Keras; this is a computer coding exercise / tutorial and NOT a talk on deep learning, what it is or how it works.

On the audio side: I'm not entirely sure if what you want is practically doable, but I know that you can downsample the audio to 8 kHz and use around 250 timesteps, and that will allow an LSTM to learn to sample vocals when trained on just vocals. A raw waveform is a long row of sample values; you could just think of that as a 1D vector. A typical feature pipeline reads a WAV file with scipy.io.wavfile.read(file_name) and then computes MFCCs (Mel-frequency cepstral coefficients) of the audio with python_speech_features' mfcc; the training data would then look like the 3D array sketched below.

A note on recurrent layers: you can set RNN layers to be 'stateful', which means that the states computed for the samples in one batch will be reused as initial states for the samples in the next batch. (An RNN cell, in Keras terms, is a class that has a call(input_at_t, states_at_t) method and a state_size attribute.) Recurrent neural networks are among the most common neural networks used in natural language processing because of their promising results. Also, the model's final activation function is softmax because we want an output that is between 0 and 1. All those extra gates make the LSTM less efficient in terms of memory and time, and they also make the GRU architecture more appealing.

The same tools show up in finance. "Predicting Stock Performance with Natural Language Deep Learning": we recently worked with a financial services partner to develop a model to predict the future stock market performance of public companies in categories where they invest. In "Deep Learning for Trading: LSTM Basics for Pairs Trading" (Michelle Lin, August 27, 2017), we will explore Long Short-Term Memory networks because this deep learning technique can be helpful for sequential data such as time series. In this post, I will also describe the sentiment analysis task of classifying the Rotten Tomatoes movie reviews dataset.
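A minimal sketch of that feature front end, reconstructed from the garbled snippet above; the filename and the 13-coefficient choice are placeholders, and the libraries (scipy, python_speech_features) are the ones the original fragment names.

    from scipy.io import wavfile
    from python_speech_features import mfcc

    sample_rate, signal = wavfile.read("clip.wav")   # "clip.wav" is a placeholder path
    # 13 MFCCs per 25 ms window with a 10 ms hop (the library defaults)
    features = mfcc(signal, samplerate=sample_rate, numcep=13)
    print(features.shape)   # (frames, 13): one 13-dim feature vector per time step

    # For Keras, stack many clips into a single 3D array of shape (N, frames, 13),
    # i.e. the (N, W, F) layout described earlier.

Each clip becomes a 2D (frames x coefficients) matrix, and the corpus becomes the 3D tensor that LSTM layers expect.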
Furthermore, keras-rl works with OpenAI Gym out of the box. While the scope of this code pattern is limited to an introduction to text generation, it provides a strong foundation for learning how to build one. The Keras examples directory covers, among others: Convolutional LSTM; Deep Dream; Image OCR; Bidirectional LSTM; 1D CNN for text classification; Sentiment classification CNN-LSTM; Fasttext for text classification; Sentiment classification LSTM; Sequence to sequence - training; Sequence to sequence - prediction; Stateful LSTM; LSTM for text generation; Auxiliary Classifier GAN. Among these, lstm_seq2seq_restore.py restores a character-level sequence-to-sequence model from disk (saved by lstm_seq2seq.py) and uses it to generate predictions, and there is an example of using Keras to implement a 1D convolutional neural network (CNN) for timeseries prediction. One public audio example is mdl_LSTM_audio.py in the MultimodalDeepLearning repository (ver_keras folder).

Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture used in the field of deep learning, one of the ways artificial neural networks have disrupted several industries lately. The long range is what matters: for example, a character's name, used at the beginning of a passage, may need to be recalled much later. At the output end of the cell, we apply the tanh layer to the cell state to regulate the values and multiply by the output gate o(t), so the hidden state is h(t) = o(t) * tanh(C(t)), as spelled out below.

Audio classification with Keras: looking closer at the non-deep-learning parts. Sometimes, deep learning is seen, and welcomed, as a way to avoid laborious preprocessing of data. In this video, we discuss how to prepare and preprocess numerical data that will be used to train a model in Keras. After completing this tutorial, you will know how to develop a small, contrived, and configurable sequence classification problem. We will also use the LSTM network to classify the MNIST data of handwritten digits, ground that courses such as "Deep Learning with Python and Keras" walk through as well.

Projects put these pieces together. SafeSense is an IoT/ML solution that recognizes audio signatures in a public environment, to provide proactive care and enhance safety; how we built it: a custom LSTM RNN using Keras/TensorFlow to detect those signatures. A lip-reading system achieves reasonable results and is able to understand someone with just a computer-vision solution, with no audio present. The trained model will be exported/saved and added to an Android app. It appears that rather than using the output of the encoder as an input for classification, they chose to seed a standalone LSTM classifier with the weights of the encoder model directly.

On tooling: while there is still feature and performance work remaining to be done, we appreciate early feedback that would help us bake in Keras support (for the CNTK back end). And one pain point: having issues with CoreML and Keras for sequence data, which is limiting what can be achieved for audio tasks; although the scripts convert using coremltools, (1) they do not run.
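For reference, here is the full update that the tanh-and-multiply sentence above is the last line of. This is the standard textbook LSTM formulation (the per-gate weights W, U, b correspond to the V, W, b parameter sets mentioned earlier), not an equation from any one quoted post:

    \begin{aligned}
    i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
    f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
    g_t &= \tanh(W_g x_t + U_g h_{t-1} + b_g) \\
    C_t &= f_t \odot C_{t-1} + i_t \odot g_t \\
    o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
    h_t &= o_t \odot \tanh(C_t)
    \end{aligned}

The four blocks (i, f, g, o), each with its own weights, are exactly why the LSTM has roughly four times the parameters of a plain RNN of the same width.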
C-LSTM is able to capture both local features of phrases as well as global and temporal sentence semantics; the main idea is to introduce memory blocks which decide the degree to which LSTM units keep the previous state and memorize the extracted features of the current input. SimpleRNN is the recurrent neural network layer described above. A Bidirectional layer is helpful because it reads the sentence both from start to end and from end to start. Deep Bidirectional LSTM (DBLSTM) recurrent neural networks have recently been shown to give state-of-the-art performance (see the Graves, Jaitly, and Mohamed paper cited at the end). More explicitly, we use Long Short-Term Memory networks (LSTM), with and without a soft attention mechanism [4], on sequences of audio signals in order to classify songs by genre.

Keras itself was developed with a focus on enabling fast experimentation, and although Keras has supported TensorFlow as a runtime backend since December 2015, the Keras API had so far been kept separate from the TensorFlow codebase. A normal Keras LSTM is implemented with several op-kernels (compare the implementation=2 note earlier). If a GPU is available and all the arguments to the layer meet the requirements of the cuDNN kernel, a fast cuDNN implementation is used. One reader reports: "Just ran your code in Keras 1.0 and got good results."

Getting the data into shape is usually the hard part; Jason Brownlee's "How to Reshape Input Data for Long Short-Term Memory Networks in Keras" (August 30, 2017, in Long Short-Term Memory Networks) covers the mechanics. In my case, I have extracted 13 MFCCs and each file contains 99 frames. In a geospatial variant, input and output data are expected to have shape (lats, lons, times). This is the second part in a three-part series. Recurrent neural networks (RNNs) have proved effective at one-dimensional sequence learning tasks, such as speech and online handwriting recognition.

Beyond that: "A Comprehensive Guide to Fine-tuning Deep Learning Models in Keras (Part I)" (October 3, 2016) gives a comprehensive overview of the practice of fine-tuning, which is common in deep learning. Additionally, we will perform text analysis using word-vector-based techniques; for financial series, the one-step-ahead prediction requires not only the latest data but also the previous data. We will use Keras to visualize inputs that maximize the activation of the filters in different layers of the VGG16 architecture, trained on ImageNet. The examples covered in this post will serve as a template / starting point for building your own deep learning APIs; you will be able to extend the code and customize it based on how scalable and robust your API endpoint needs to be. Topics covered are DNN, CNN, RNN, LSTM, and deep autoencoders. And from the job boards: I need a Linux-based application that is able to learn or be trained on speakers, based on audio files provided.
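Continuing the 13-MFCC, 99-frame example, a sketch of the reshape-then-model flow; the file count, layer width, and class count are invented for illustration, and the Bidirectional wrapper stands in for the "start to end and end to start" reading described above.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, Bidirectional, Dense

    num_files, frames, n_mfcc, num_classes = 200, 99, 13, 10
    X = np.random.rand(num_files, frames, n_mfcc)   # (samples, timesteps, features)

    model = Sequential()
    model.add(Bidirectional(LSTM(64), input_shape=(frames, n_mfcc)))  # reads both directions
    model.add(Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    y = np.random.randint(0, num_classes, size=(num_files,))
    model.fit(X, y, epochs=2, batch_size=16)

This is also where the earlier softmax plus sparse categorical cross-entropy pairing lands: integer class labels, one probability per class at the output.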
In this post, you will discover the CNN LSTM architecture for sequence prediction. After reading this post you will know how to develop an LSTM model for a sequence classification problem. "Sequence Classification with LSTM RNN in Python with Keras": in this project, we are going to work on sequence-to-sequence prediction using the IMDB movie review dataset with Keras in Python.

"Creating A Text Generator Using Recurrent Neural Network" (a 14-minute read) opens: hello guys, it's been another while since my last post, and I hope you're all doing well with your own projects. I am trying to get started learning about RNNs, and I'm using Keras; at first we need to choose some software to work with neural networks. See keras/examples/lstm_text_generation.py. I understand the basic premise of vanilla RNN and LSTM layers, but I'm having trouble understanding a certain technical point for training: do you just call train(sequence_length), or whatever the equivalent is?

On the research side, "Improved Variational Autoencoders for Text Modeling using Dilated Convolutions" (ICML '17) argues that one of the reasons a VAE with an LSTM decoder is less effective than an LSTM language model is that the LSTM decoder ignores conditioning information from the encoder. Neural networks are powerful for pattern classification and are at the base of deep learning techniques; a plain LSTM has an internal memory cell that can learn long-term dependencies of sequential data. The results can then be compared to those obtained by a Support Vector Machine (SVM). See also "Short-Term Residential Load Forecasting based on LSTM Recurrent Neural Network", IEEE Transactions on Smart Grid, September 2017. Through a combination of advanced training techniques and neural network architectural components, it is now possible to create neural networks that can handle tabular data, images, text, and audio as both input and output.

For audio specifically, the main idea is to combine classic signal processing with deep learning to create a real-time noise suppression algorithm that's small and fast. However, there are cases where preprocessing of sorts does not only help improve prediction, but constitutes a fascinating topic in itself. A recurrent layer with return_sequences=True takes as input 3D tensors with shape (samples, time, features) and returns similarly shaped 3D tensors.

We are happy to bring CNTK as a back end for Keras as a beta release to our fans asking for this feature. The following are code examples showing how to use Keras; they are extracted from open source Python projects, and you can vote up the examples you like or vote down the ones you don't like. Custom layers typically start from: from keras.engine.topology import Layer; from keras import initializers, regularizers, constraints.
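A minimal sketch of the CNN LSTM pattern for spatial sequences such as video, assuming toy dimensions (16-frame clips of 64x64 grayscale images, 5 classes) that are not from any quoted post:

    import numpy as np
    from keras.models import Sequential
    from keras.layers import TimeDistributed, Conv2D, MaxPooling2D, Flatten, LSTM, Dense

    frames, h, w, ch, num_classes = 16, 64, 64, 1, 5
    model = Sequential()
    # The CNN runs once per frame via TimeDistributed...
    model.add(TimeDistributed(Conv2D(8, (3, 3), activation="relu"),
                              input_shape=(frames, h, w, ch)))
    model.add(TimeDistributed(MaxPooling2D((2, 2))))
    model.add(TimeDistributed(Flatten())))
    # ...oops, see note below; the line above should read:
    # model.add(TimeDistributed(Flatten()))
    model.add(LSTM(32))                          # integrates per-frame features over time
    model.add(Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    X = np.random.rand(4, frames, h, w, ch)
    y = np.random.randint(0, num_classes, size=(4,))
    model.fit(X, y, epochs=1)

The CNN acts as the spatial feature extractor and the LSTM as the temporal integrator, which is exactly the division of labor the CNN LSTM description above names.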
Here we will use Keras to build an LSTM. But not all LSTMs are the same as the above; in fact, it seems like almost every paper involving LSTMs uses a slightly different version. I haven't found too much info floating around about the actual, specific mechanics of how LSTM training is done, and some methods are hard to use and not always useful. So let's hand-code an LSTM network. A long short-term memory (LSTM) network is a type of recurrent neural network specially designed to prevent the network output for a given input from either decaying or exploding as it cycles through the feedback loops; for the broader picture, see "The Unreasonable Effectiveness of Recurrent Neural Networks".

An LSTM for time-series classification is a typical case. As one question put it: what is the best method for classification of time series? My Y is (N_signals, 1500, 2) and I'm working with Keras. This is also the case in the example script that shows how to teach an RNN to learn to add numbers, encoded as character strings. We can either make the model predict or guess the sentences for us and correct the errors. As one commenter clarified: "That is what I meant with output dimension (I don't know how you would call it otherwise)."

Once fit, the encoder part of the model can be used to encode or compress sequence data, which in turn may be used in data visualizations or as a feature. Keras: Convolutional LSTM. Stacking recurrent layers on top of convolutional layers can be used to generate sequential output (like text) from structured input (like images or audio) [1]. The CNN Long Short-Term Memory network, or CNN LSTM for short, is an LSTM architecture specifically designed for sequence prediction problems with spatial inputs, like images or videos. (Internally, when a layer is called, Keras updates the _keras_history of the output tensor(s) with the current layer.)

These are some examples. Introduction: in this tutorial we will build a deep learning model to classify words; we will use the Speech Commands dataset, which consists of 65,000 one-second audio files of people saying 30 different words. Audio generation with LSTM is another direction; "Composing Music with LSTM -- Blues Improvisation" collects some multimedia files related to my LSTM music composition project. In today's post, by contrast, we will learn how to recognize text in images using an open-source tool called Tesseract and OpenCV.
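One reading of the Y = (N_signals, 1500, 2) question is that the model should emit two values at every time step. A sketch under that assumption (the feature count and all data are invented):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import LSTM, TimeDistributed, Dense

    n_signals, timesteps, n_features = 32, 1500, 8   # n_features is a placeholder
    model = Sequential()
    model.add(LSTM(64, return_sequences=True, input_shape=(timesteps, n_features)))
    model.add(TimeDistributed(Dense(2)))             # 2 outputs at every time step
    model.compile(optimizer="adam", loss="mse")

    X = np.random.rand(n_signals, timesteps, n_features)
    Y = np.random.rand(n_signals, timesteps, 2)      # matches (N_signals, 1500, 2)
    model.fit(X, Y, batch_size=8, epochs=1)

return_sequences=True is what makes the LSTM output a full (samples, time, features) tensor rather than only its final state, which is the shape the target demands.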
The GRU comprises a reset gate and an update gate instead of the input, output, and forget gates of the LSTM. LSTMs enable RNNs to remember their inputs over a long period of time, so that they become capable of learning long-term dependencies. What I've described so far is a pretty normal LSTM. (In the keras.layers.RNN API, cell is an RNN cell instance or a list of RNN cell instances, and, if necessary, Keras builds the layer to match the shape of the input(s).)

I still remember when I trained my first recurrent network, for image captioning. Neural networks are a different breed of models compared to supervised machine learning algorithms, and it was a very time-consuming job to understand the raw code from the Keras examples; imdb_cnn_lstm.py and the convolutional-LSTM example are both approaches for finding the spatiotemporal pattern in a dataset which has both (like a video or audio file, I assume). I then replaced the LSTM layer with a Dense layer just to see the effect (I did remove the return_sequences=False argument). See also "Benchmarking CNTK on Keras: is it Better at Deep Learning than TensorFlow?" (29 Aug 2017).

On the research front, one paper introduces WaveNet, a deep neural network for generating raw audio waveforms. Another abstract: this paper develops a model that addresses sentence embedding, a hot topic in current natural language processing research, using recurrent neural networks with long short-term memory (LSTM) cells. Combined with an audio module, our system achieved a recognition accuracy of 59.02% without using any additional emotion data. These technologies are dominating, and in a way invading, human lives.

Mel-frequency cepstral coefficients (MFCC): MFCC features are commonly used for speech recognition, music genre classification, and audio signal similarity measurement. For example, in a 2-second audio file, we extract values at every half second. For the 13-MFCC, 99-frame inputs described earlier, I have used 3 LSTM layers and 2 Dense layers. We will use tfdatasets to handle data IO and pre-processing, and Keras to build and train the model.

Figure 2: Dog barking in different domains.
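The gate count difference above is directly visible in the parameter counts. A small comparison sketch; the 32-unit width and the 99x13 input are carried over from the earlier examples, and the GRU arithmetic assumes the classic formulation (reset_after=False, the default in older Keras releases):

    from keras.models import Sequential
    from keras.layers import LSTM, GRU

    def build(cell):
        # Same input shape and width; only the recurrent cell differs
        m = Sequential()
        m.add(cell(32, input_shape=(99, 13)))
        return m

    print("LSTM:", build(LSTM).count_params())  # 4 gate blocks: 4*((13+32)*32+32) = 5888
    print("GRU: ", build(GRU).count_params())   # 3 gate blocks: 3*((13+32)*32+32) = 4416

Three gate blocks instead of four is the whole story: the GRU saves about a quarter of the recurrent parameters, which is why it is often the cheaper drop-in choice.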
Music composition shares a lot with image processing; applying a GAN with an LSTM seems like a good way to improve results. Stacking multiple LSTMs is likely to capture more variation in the data, and thus potentially give better accuracy; see the sketch below. The LSTM Future Predictor Model consists of an encoding model and a decoding model, as described earlier. For speech, see "Hybrid Speech Recognition with Deep Bidirectional LSTM" by Alex Graves, Navdeep Jaitly, and Abdel-rahman Mohamed (Department of Computer Science, University of Toronto, 6 King's College Rd., Toronto, M5S 3G4, Canada); for forecasting, see "Time Series Prediction with LSTM Recurrent Neural Networks in Python". Finally, you will learn about transcribing images and audio and generating captions, and also use deep Q-learning to build an agent that plays the Space Invaders game.
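A minimal sketch of that stacking, reusing the 99x13 input shape and a 10-class head from the earlier examples (both assumptions, not from the quoted text). The one rule that matters: every LSTM that feeds another LSTM must return its full sequence.

    from keras.models import Sequential
    from keras.layers import LSTM, Dense

    model = Sequential()
    model.add(LSTM(64, return_sequences=True, input_shape=(99, 13)))  # feeds the next LSTM
    model.add(LSTM(64, return_sequences=True))                        # still per-time-step
    model.add(LSTM(32))                                               # last one: final state only
    model.add(Dense(10, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    model.summary()

Dropping return_sequences=True on an inner layer is the classic stacking bug: the next LSTM then receives a 2D tensor instead of the 3D (samples, time, features) it expects.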