Rio de Janeiro, Rio de Janeiro, Brazil
4K followers · 500+ connections

About

Engineering executive with experience leading global product and engineering…

Experience and education

  • Uber

Licenses & certifications

Publications

  • An Adaptive Implicit Model for Short Clips Recommendations

    2nd Workshop on Recommendation Systems for Television and Online Video - ACM RecSys 2015

    To generate relevant video recommendations using item-item collaborative filtering, it is key to understand how interesting a specific video is to a specific user. One approach to infer this level of interest is through implicit feedback, where users do not evaluate the content directly. There are several ways to capture this implicit feedback, such as a basic binary signal indicating whether the user played the offered content or not. In this paper we propose an adaptive implicit feedback model used to produce video recommendations with an item-item collaborative filtering approach. This model innovates by considering that each video has different content and that the level of interest in each video can be completely different. Based on this, the implicit feedback model automatically adapts itself to reflect real user behavior, and uses that behavior to infer a user's level of interest in a specific video. To validate the proposed model, it was exhaustively tested on a real video portal. These tests show good results. (See the adaptive-feedback sketch after this list.)

    Other authors
    See publication
  • Cloud Based Real-Time Collaborative Filtering for Item-Item Recommendations

    Computers in Industry - Elsevier

    In this paper, we describe a large-scale implementation of a video recommendation system in use by the largest media group in Latin America. Taking advantage of existing recommendation system techniques, the proposed architecture goes beyond the state of the art by making use of a commercial cloud computing platform to provide scalability and to reduce costs and, more importantly, response times. We discuss the implementation in detail, in particular the design of cloud-based features. We also provide a comprehensive generalization of the architecture that allows its application in other settings. (See the incremental item-item similarity sketch after this list.)

    Other authors
    See publication
  • How good are Classic Distributed Algorithms for Replica Management in Cloud Services?

    Almost every cloud platform has some dependable core services that are based on replicated data/state and require strong consistency among the replicas. As these replicas may be hosted in geographically distributed data centers, a cloud platform's consistency-preserving algorithms are challenged by unpredictable communication latencies and temporary network partitions. On the other hand, in the last three decades much research on algorithms for distributed replica consistency has been done, and some of it has been successfully incorporated into practical systems. In this work we analyze the specific consistency requirements and the common distributed deployments of cloud platforms, and discuss to what extent classic distributed algorithm design has contributed to the solution of these internet-cloud problems. (See the quorum sketch after this list.)

    Other authors
    See publication
  • Cloud Based Item-Item Recommendations

    Cloud Futures 2012 - Microsoft Research

    In this paper we argue that the combination of collaborative filtering techniques, particularly for item-item recommendations, with emergent cloud computing technology can drastically improve algorithm efficiency, particularly in situations where the number of items and users scales up to several million objects. We introduce a real-time item-item recommendation architecture, which rationalizes the use of resources by exploiting on-demand computing. The proposed architecture provides a real-time solution for computing online item similarity, without having to resort to either model simplification or the use of input data sampling. We present results from a real-life case study to show that it is possible to greatly reduce recommendation times (and overall costs) by using dynamic resource provisioning in the Cloud. Finally, we also discuss potential research opportunities that arise from this paradigm shift.

    Other authors
    See publication
  • Video Processing in the Cloud

    Springer

    As computer systems evolve, the volume of data to be processed increases significantly, either as a consequence of the expanding amount of available information, or due to the possibility of performing highly complex operations that were not feasible in the past. Nevertheless, tasks that depend on the manipulation of large amounts of information are still performed at large computational cost, i.e., either the processing time will be long, or they will require intensive use of computer resources. In this scenario, the efficient use of available computational resources is paramount, and creates a demand for systems that can optimize the use of resources in relation to the amount of data to be processed. This problem becomes increasingly critical when the volume of information to be processed is variable, i.e., when there is a seasonal variation in demand. Such demand variations are caused by a variety of factors, such as an unanticipated burst of client requests, a time-critical simulation, or high volumes of simultaneous video users, e.g., as a consequence of a public contest. In these cases, there are moments when demand is very low (resources are almost idle) while, conversely, at other moments the processing demand exceeds the capacity of the available resources. Moreover, from an economic perspective, seasonal demands do not justify a massive investment in infrastructure just to provide enough computing power for peak situations. In this light, the ability to build adaptive systems, capable of using on-demand resources provided by Cloud Computing infrastructures, is very attractive.

    Other authors
    See publication
  • A Cloud Based Architecture for Improving Video Compression Time Efficiency: The Split&Merge Approach

    IEEE Data Compression Conference (DCC 2011)

    In this paper we argue that combining mature video compression techniques, in particular those of the H.26* family, with emergent Cloud Computing technology can drastically improve the overall time efficiency of the compression process. We introduce the Split & Merge architecture for high-performance video processing, a generalization of the MapReduce paradigm that rationalizes the use of resources by exploiting on-demand computing. We present experimental results and show that, independently of the size of the input, it is possible to greatly reduce video encoding times by using dynamic resource provisioning in the Cloud. At the end of the paper we discuss potential research opportunities that arise from this paradigm shift. (See the Split & Merge sketch after this list.)

    Other authors
    See publication
  • An Architecture for Distributed High Performance Video Processing in the Cloud

    2010 IEEE 3rd International Conference on Cloud Computing (CLOUD)

    Video processing applications are notably data-, time-, and resource-intensive. Upfront infrastructure investment is usually high, especially when dealing with applications where time-to-market is a crucial requirement, e.g., breaking news and journalism. Such infrastructures are often inefficient because, due to demand variations, resources may end up idle for a good portion of the time. In this paper, we propose the Split&Merge architecture for high-performance video processing, a generalization of the MapReduce paradigm that rationalizes the use of resources by exploiting on-demand computing. To illustrate the approach, we discuss an implementation of the Split&Merge architecture that reduces video encoding times to a fixed duration, independently of the size of the input video file, by using dynamic resource provisioning in the Cloud.

    Other authors
    See publication
  • When TV Dies, Will It Go to the Cloud?

    IEEE Computer Magazine - April Issue

    The paper argues that, coupled with the expected growth in bandwidth over the next decade, cloud computing will change the face of TV. The Internet brought the potential to completely reinvent TV. First, it let users see what they wanted, when they wanted, while suppressing the need for additional hardware. Second, and more importantly, the Net removes the barrier that separates producers, distributors, and consumers. Third, the Internet allows mixing and matching of multi-source content: it has become commonplace for networks to mix their own footage with user-generated content to provide a more holistic experience. From a technical viewpoint, however, huge challenges remain, including the ability to process, index, store, and distribute nearly limitless amounts of data. This is why cloud computing will play a major role in redefining TV in the next few years.

    Other authors
    See publication
  • Cloud TV

    Cloud Futures - Microsoft Research

    Production challenges: The exponential growth of User-Generated Content (UGC) makes it virtually impossible to estimate the volume of resources needed to run open submission systems, in particular those whose usage is seasonal. We introduce a cloud-based architecture that addresses this problem, and demonstrate an instance application that runs the registration system for candidates who wish to participate in the Brazilian Big Brother reality TV show.
    Distribution challenges: The proliferation of mobile device types is pushing the demand for processing services to unprecedented levels. Every object is processed several times to ensure encoding-standard compatibility with different devices (PCs, mobile phones, media centers, game consoles, etc.) and different codecs (H.264 Baseline, Main, and High profiles, H.263, etc.), as well as different compression rates, to adapt to local storage and bandwidth restrictions. We introduce a generalization of the MapReduce architecture to tackle this problem, and demonstrate a running private-cloud implementation that reduces HD video encoding times dramatically, to pre-fixed values, independently of content duration.

    Other authors
    See publication
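
The adaptive implicit-feedback publication above describes turning raw viewing behavior into an implicit signal that adapts per video before feeding item-item collaborative filtering. As a rough illustration only (not the paper's actual model; the data, thresholds, and function names here are hypothetical), a minimal Python sketch might look like this:

```python
# Illustrative sketch: per-video adaptive implicit feedback feeding
# item-item cosine similarity. All names and data are hypothetical.
from collections import defaultdict
import math

# (user, video) -> fraction of the video the user watched, in [0, 1]
watch_fraction = {
    ("u1", "v1"): 0.9, ("u1", "v2"): 0.2,
    ("u2", "v1"): 0.8, ("u2", "v2"): 0.9, ("u2", "v3"): 0.7,
    ("u3", "v2"): 0.3, ("u3", "v3"): 0.6,
}

def adaptive_feedback(watch_fraction):
    """Turn raw watch fractions into binary implicit feedback using a
    per-video threshold (here, the video's own mean completion rate),
    so the signal adapts to how each video is actually consumed."""
    per_video = defaultdict(list)
    for (_, video), frac in watch_fraction.items():
        per_video[video].append(frac)
    threshold = {video: sum(fracs) / len(fracs) for video, fracs in per_video.items()}
    return {(user, video): 1.0 if frac >= threshold[video] else 0.0
            for (user, video), frac in watch_fraction.items()}

def item_similarity(feedback):
    """Plain item-item cosine similarity over the implicit feedback vectors."""
    by_item = defaultdict(dict)
    for (user, video), value in feedback.items():
        by_item[video][user] = value
    items, sims = list(by_item), {}
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            dot = sum(by_item[a][u] * by_item[b].get(u, 0.0) for u in by_item[a])
            norm_a = math.sqrt(sum(v * v for v in by_item[a].values()))
            norm_b = math.sqrt(sum(v * v for v in by_item[b].values()))
            if norm_a and norm_b:
                sims[(a, b)] = dot / (norm_a * norm_b)
    return sims

print(item_similarity(adaptive_feedback(watch_fraction)))
```

The per-video threshold is the design point the abstract emphasizes: a 30% completion of a feature film and a 30% completion of a short clip should not map to the same implicit signal.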
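
The cloud-based item-item recommendation entries above revolve around computing item similarities online rather than in periodic batches. A minimal sketch of one way to do that, assuming a simple in-memory co-occurrence counter and Jaccard similarity (not the published architecture, which runs on on-demand cloud resources):

```python
# Illustrative sketch: incremental co-occurrence counting so item-item
# similarities can be served in near real time as play events arrive.
from collections import defaultdict

class IncrementalItemItem:
    def __init__(self):
        self.item_counts = defaultdict(int)   # users who played each item
        self.co_counts = defaultdict(int)     # users who played both items of a pair
        self.user_history = defaultdict(set)  # items already seen per user

    def record_play(self, user, item):
        """Update counters for a single play event (O(|user history|) per event)."""
        history = self.user_history[user]
        if item in history:
            return
        self.item_counts[item] += 1
        for other in history:
            self.co_counts[frozenset((item, other))] += 1
        history.add(item)

    def similarity(self, a, b):
        """Jaccard similarity between two items, read directly from the counters."""
        inter = self.co_counts[frozenset((a, b))]
        union = self.item_counts[a] + self.item_counts[b] - inter
        return inter / union if union else 0.0

engine = IncrementalItemItem()
for user, item in [("u1", "v1"), ("u1", "v2"), ("u2", "v1"), ("u2", "v2"), ("u3", "v1")]:
    engine.record_play(user, item)
print(engine.similarity("v1", "v2"))   # 2 / (3 + 2 - 2) = 0.666...
```

Because each event only touches the counters it affects, the update cost scales with a user's history rather than with the full catalog, which is what makes online similarity serving feasible at the scale the papers describe.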
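
The replica-management publication discusses strong consistency across geo-distributed replicas. One classic building block it alludes to is the strict quorum condition; the check below is a textbook illustration under that assumption, not an excerpt from the paper:

```python
# Classic strict-quorum condition for strongly consistent replica management.

def quorums_guarantee_consistency(n_replicas: int, write_quorum: int, read_quorum: int) -> bool:
    """Strong consistency via quorum intersection requires:
       - W + R > N  (every read quorum overlaps every write quorum)
       - 2W > N     (two concurrent writes cannot both commit independently)"""
    return (write_quorum + read_quorum > n_replicas) and (2 * write_quorum > n_replicas)

# Typical geo-replicated deployment: 5 replicas across data centers.
print(quorums_guarantee_consistency(5, write_quorum=3, read_quorum=3))  # True
print(quorums_guarantee_consistency(5, write_quorum=2, read_quorum=2))  # False: a read may miss the latest write
```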
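
The Split & Merge publications describe a MapReduce-style pipeline: split the input video into segments, encode the segments in parallel on on-demand workers, and merge the results back in order. The sketch below only mimics that flow with a stub encoder and a local thread pool; the real system drives actual encoders on cloud instances:

```python
# Illustrative sketch of the Split & Merge flow: split -> parallel encode -> merge.
# encode_segment is a stand-in for a real encoder invocation on a cloud worker.
from concurrent.futures import ThreadPoolExecutor

def _frange(start, stop, step):
    while start < stop:
        yield start
        start += step

def split(duration_s: float, segment_s: float):
    """Split phase: cut the timeline into fixed-length segments."""
    return [(start, min(segment_s, duration_s - start))
            for start in _frange(0.0, duration_s, segment_s)]

def encode_segment(segment):
    """Process phase (stub): each segment would be encoded by an on-demand
    worker; here we just return a descriptive token."""
    start, length = segment
    return f"encoded[{start:.0f}s..{start + length:.0f}s]"

def merge(encoded_parts):
    """Merge phase: reassemble the encoded segments in their original order."""
    return "+".join(encoded_parts)

segments = split(duration_s=90.0, segment_s=30.0)
with ThreadPoolExecutor(max_workers=len(segments)) as pool:
    parts = list(pool.map(encode_segment, segments))   # parallel 'map' step
print(merge(parts))   # encoded[0s..30s]+encoded[30s..60s]+encoded[60s..90s]
```

With enough workers, total encoding time is bounded by the time to encode one segment plus split/merge overhead, which is why the papers report roughly fixed encoding times regardless of input duration.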

Recommendations received

