30 Dec 2013

Postdocs and PhD studentships at Tampere University of Technology, Tampere, Finland

Open positions at the Multimedia Research Group (the MUVIS Team, http://muvis.cs.tut.fi) at the Department of Signal Processing, Tampere University of Technology (TUT), Finland

2 POSTDOCS
1 PHD STUDENTSHIP

Tampere University of Technology (TUT) is an active scientific community of 2,000 employees and more than 10,000 students. The University operates in the form of a foundation and has a long-standing tradition of collaboration with research institutions and business life. Many of the fields of research and study represented play a key role in addressing global challenges. International collaboration is an inherent part of all the University's activities.

The Department of Signal Processing belongs to the Faculty of Computing and Electrical Engineering. Signal processing has been chosen as one of TUT's top strategic fields of research, and the Department has hosted two Academy of Finland Centres of Excellence. Nearly 200 faculty, staff and researchers work at the Department, and nearly half of them are international.

Job description:
The positions are for scientific research in externally funded projects related to “big” learning in Big Data. The Postdocs are expected to perform independent research, collaborate with team members and supervise PhD and MSc students in addition to project management duties.

Doctoral Students will work toward a dissertation as a member of the research team and they will be supervised by the senior members of the group.

Requirements:
We are looking for creative and highly motivated researchers. Suitable disciplines for all open positions include Signal Processing, Artificial Intelligence, Machine Learning, Computer Vision and other related areas. Fluent written and spoken English and solid programming (Matlab/C/C++) skills are required. Excellent skills in computer vision, machine learning (deep learning and graph theory) and content-based multimedia retrieval are essential. Java and web programming skills are valuable.
 
Applicants to the postdoc positions should have completed, or be close to completing, a PhD degree. Candidates applying for the doctoral student position must hold an MSc degree in a related engineering field and are expected to enroll as PhD students at TUT.

Salary:
The salary will be set in accordance with the University Salary System. Starting Doctoral Students receive a monthly salary of 2,200 euros and Postdoctoral Researchers 3,200 euros.

For more information, contact
Academy Professor Moncef Gabbouj, moncef.gabbouj AT tut.fi (http://www.cs.tut.fi/~moncef/open-positions.htm )

How to apply:
Applications can be submitted in PDF format by email to moncef.gabbouj AT tut.fi. The positions will remain open until filled. The target starting date is 1 February 2014 (or earlier).

The (preferably single document) application should include the following items:
- Letter of motivation
- CV (including names and contact details of at least two references, one of which is preferably the MSc or PhD thesis supervisor)
- Copy of MSc/PhD degree certificate
- List of publications
- Research abstract

24 Dec 2013

Predictions from 25 years ago

A TV news program from 25 years ago predicted the development of computers, including speech recognition, with astonishing accuracy:



10 Dec 2013

Another interesting article ...

Polish version of Interaction Analyzer

"Interactive Intelligence has added a Polish-language version of its real-time speech analysis application to its portfolio. The Interaction Analyzer software (intended primarily for customer contact centers, but also for medium-sized and large enterprises) lets contact centers monitor agents' work and evaluate recordings." - Computerworld

6 Dec 2013

On the koala's voice

Another interesting BBC article about voice, this time in animals - link. Apparently koalas have an extra pair of vocal folds.

3 Dec 2013

ACOUSTICS IN LINGUISTICS - LINGUISTICS IN ACOUSTICS

"Dear Colleagues,

we cordially invite you to take part in a scientific conference organized by the Department of the History of the Polish Language and Dialectology, Faculty of Polish Studies, University of Warsaw:

The conference will take place on 26-27 September 2014 at the Faculty of Polish Studies, University of Warsaw (ul. Krakowskie Przedmieście 26/28). Detailed information can be found in the attachments and at www.zhjpid.uw.edu.pl (tab: "Konferencja Akustyka w językoznawstwie...").

Yours sincerely,

dr Justyna Garczyńska
dr Monika Kresa"

29 Nov 2013

Mgr inż. Sandra Imiela

Today saw the defense of the thesis "KOMPUTEROWA GRA FABULARNA OPARTA NA SYSTEMACH DIALOGOWYCH" (a computer role-playing game based on dialogue systems), which I supervised and to which, inexcusably, I arrived late. Warm congratulations and best of luck to the author, mgr inż. Sandra Imiela.

28 Nov 2013

PVC submission deadline extended

About 60 submissions have already come in for the Pacific Voice Conference. Even so, we have decided to extend the abstract submission deadline to 20 December.

www.dsp.agh.edu.pl/pvc

19 Nov 2013

Sarmata 2.0

The new version of Sarmata, developed by Techmo sp. z o.o., achieved an average accuracy of 97.7% in tests on over 5,000 recorded utterances. In 99.6% of cases the correct hypothesis was among the top three on the list of strongest hypotheses. In December the system will be tested at ADESCOM Polska sp. z o.o.
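As a rough illustration (this is not Techmo's actual test harness, and the utterances below are invented), figures like the average accuracy and the top-3 rate quoted above can be computed from ranked hypothesis lists:

```python
# Hedged sketch: top-N accuracy over a test set, assuming each recognition
# run returns a ranked list of hypotheses for one utterance.

def top_n_accuracy(results, n):
    """results: list of (reference, ranked_hypotheses) pairs."""
    hits = sum(1 for ref, hyps in results if ref in hyps[:n])
    return hits / len(results)

# Toy test set of three utterances (made up for illustration):
results = [
    ("otwórz okno",   ["otwórz okno", "otwórz oko"]),       # correct at rank 1
    ("zamknij drzwi", ["zamknij dźwig", "zamknij drzwi"]),  # correct at rank 2
    ("włącz światło", ["wyłącz światło", "włącz radio"]),   # missed entirely
]

print(top_n_accuracy(results, 1))  # top-1 accuracy: 1 of 3 utterances
print(top_n_accuracy(results, 3))  # top-3 accuracy: 2 of 3 utterances
```

On a real test set the references would be compared after text normalization, and word-level metrics such as WER would usually accompany this sentence-level accuracy.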

17 Nov 2013

Balbus Speech

Jack McDermott has developed two applications that help treat speech disorders. Speech 4 Good and Fluently are based on proven therapeutic methods. They have already been downloaded over 10,000 times. The apps cost 15 USD and 10 USD, while according to inc.com the retail price of other tools of this kind runs into thousands of dollars.

(quoted after firma.pb.pl)

15 Nov 2013

DSP AGH channel on YouTube


We invite you to subscribe to our YouTube channel.

P.S. How does one get a nice YouTube link like youtube.com/dsp_agh?

11 Nov 2013

Examples of job offers in speech technology

Job Title: Speech Scientist

Organization: Synchronoss VoiceCare R&D

Office: Bridgewater, NJ preferred

Primary Mission: Overall responsibility for maintaining & improving the speech recognition performance & accuracy of one or more existing VoiceCare solutions, and to create new grammars for new solutions or new functionalities in existing solutions.
Key Responsibilities:
- Work closely with Product/Project Managers, Architects, Business Analysts, VUI Designers, Software/Systems Engineers, Customers, & Partners to understand and define the product roadmap & requirements for VoiceCare solutions from a speech recognition perspective
- End-to-end grammar development and performance of existing & new VoiceCare solutions
- Streamlining our existing grammars to maximize and promote reusability and common code
- Innovating and improving our continuous application grammar tuning process – especially around Natural Language Grammars & Statistical Language Models
- Maintaining, supporting, enhancing, & improving our grammars
- Learning to first use all existing Speech Science tools, and then helping to maintain and improve our tools
- As a key member of a cross-functional team, monitor, analyze, & tune our VoiceCare products on a regular basis to improve performance against 3 key dimensions:
  - Application Effectiveness (how well the application performs against the business objectives, for example, maximizing call-completion rate)
  - Caller Experience (how good the overall caller’s experience is in using the application)
  - Speech Recognition Accuracy (how well the application understands what callers are saying)
- Identifying, defining, improving, and applying best practices in grammar design, development, & tuning
- Delivering high-quality application/grammar releases to QA and into production with zero or minimal defects
- Providing constructive feedback and suggestions for improvements on our products, platforms, tools, & business processes to the appropriate departments
- Working professionally, productively, and effectively with all cross-functional team members both within the company and outside the company (e.g. customers, partners, & vendors)
- Staying current on the latest industry trends, technologies, practices, & innovation in speech recognition technology – introducing relevant ideas into the company as appropriate
- Representing the company as a Speech Science Subject Matter Expert in customer-facing meetings, conferences, trade shows, & seminars as appropriate
- Promoting the company’s brand & presence in the Speech Science community by publishing technical papers and articles as appropriate
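As a loose sketch of the three tuning dimensions named above (the record fields and formulas are illustrative assumptions, not Synchronoss's actual metrics), the per-call logs of a tuning cycle might be aggregated like this:

```python
# Hypothetical aggregation of per-call records into the three KPIs:
# application effectiveness, caller experience, and recognition accuracy.
# Field names are invented for illustration.

def kpis(calls):
    n = len(calls)
    return {
        # Application Effectiveness: share of calls completed in the app
        "call_completion_rate": sum(c["completed"] for c in calls) / n,
        # Caller Experience: here approximated by the mean caller rating
        "mean_caller_rating": sum(c["rating"] for c in calls) / n,
        # Speech Recognition Accuracy: share of correctly recognized turns
        "recognition_accuracy": (
            sum(c["correct_turns"] for c in calls)
            / sum(c["total_turns"] for c in calls)
        ),
    }

calls = [
    {"completed": True,  "rating": 5, "correct_turns": 9, "total_turns": 10},
    {"completed": False, "rating": 2, "correct_turns": 6, "total_turns": 10},
]
print(kpis(calls))
```

In practice each dimension would be tracked against its own target (e.g. a minimum call-completion rate) and trended across tuning releases rather than reported as a single snapshot.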
Job Requirements:
- Minimum: a Bachelor of Science degree in Computer Science, Machine Learning, Linguistics, or a related field
- Preferred: Master's or PhD
- 4 to 8 years of direct experience in speech science research and development
- Experience in developing large-scale complex grammars with very large vocabulary and/or statistical language models
- Experience working with industry-leading Speech Recognition Engines (e.g. Nuance, AT&T, Microsoft, etc.)
- Experience working with industry-leading VoiceXML Browsers (e.g. Genesys, Avaya, Nortel, etc.)
- Excellent communication skills – written and oral
- Willingness to travel as required

Desirable:
- Experience in the Communications Services Industry
- Published author in speech science (books, papers, journals, etc.)
About this company
Synchronoss Technologies (NASDAQ: SNCR) is the mobile innovation leader that provides personal cloud solutions and software-based activation for connected devices across the globe. The company’s proven and scalable technology solutions allow customers to connect, synchronize and activate connected devices and services that empower enterprises and consumers to live in a connected world. 

For more information visit us at: 
Web: www.synchronoss.com
Blog: http://blog.synchronoss.com
Twitter: http://twitter.com/synchronoss




-------------------------------------------------------


Job Category: Software Engineering: Development
Location: Bellevue, WA, US
Job ID: 852476-124202
Division: Applications and Services Engineering

Microsoft is hiring Speech Scientists with outstanding speech technology development skills to develop and advance Microsoft's core speech technology. 

As one of Microsoft’s most exciting initiatives, speech recognition is driving the adoption of natural user interfaces: as it becomes more accurate and robust, its applications become more useful, especially for understanding users’ intent in the mobile and entertainment domains. At the foundation are the science and technology that enable the user experiences and power our network services, clients and devices. The Speech Science and Technology Team makes possible voice features in Enterprise, Entertainment and Mobile products, and particularly in the voice platform that powers Xbox Kinect voice search and command & control (C&C), Bing voice search, and short message dictation and C&C in Windows Phones. Our group brings together talent in the areas of recognition, synthesis and modeling (machine learning & classification) to develop and deliver robust, natural and scalable speech recognition and speech synthesis across a rich set of scenarios and languages.

We are looking for high achievers with a deep speech R&D background characterized by concrete contributions that have advanced the state of the art of either commercial or academic speech recognition systems. As a member of the Speech Science and Technology Team, you will help drive the development of speech recognition core technology and computing infrastructure. We have huge opportunities for research and development on real data sets provided by one of the world’s largest speech data feedback loops.

You must have a track record of thought leadership, leading innovation and creative vision. You are up to the technical & competitive challenges of a fast moving technology impacting many of Microsoft’s key businesses. 

Essential attributes and competencies include: 
•Excellence in scientific thinking and execution
•Passion for new UI paradigms incorporating speech technologies
•Ability to own and drive experiment definition, investigations, and development and ultimately be responsible for the speech recognition performance of key scenarios that enable the release of new Microsoft products and services. 

Additional qualifications for this position include: 
•PhD in CS/EE with focus in one or more of acoustic modeling, language modeling, statistical modeling, search, machine learning, noise-robust signal processing, or equivalent experience
•5+ years of Speech R&D in an academic or commercial setting, and software development skills and aptitude for software design and coding
•3+ years of experience in C/C++, large scale computing, and programming with scripting languages.



About this company
AMAZING THINGS HAPPEN HERE! 

At Microsoft, we're about helping customers realize their potential. From gamers to governments, moms to mega-corporations, we serve just about every kind of customer, all over the globe. 

Many people think Microsoft = software. We do do software, but we also do hardware, services, research, and more. We work on PC operating systems and applications, like Windows and Windows Live. Products for IT professionals and developers, like Windows Server and Visual Studio. Online services such as Bing and MSN. Business solutions like Office and Exchange. And devices like Xbox, keyboards, webcams, and mice. We're passionate about what we do.

What this means if you come to work here is opportunity: to do things that make a real difference in millions, even billions, of lives. To reach your potential. So why not take a closer look at Microsoft? We think you'll find that amazing things really do happen here.





-------------------------------------------------------------------------------------

The Media Group is responsible for the development of our cutting-edge internet telephony (VoIP) and media processing components. We are working on some exciting new product features related to speech recognition, speech synthesis, and speech analytics.

If you are self-motivated, driven, and have a perfectionist streak, we have an exciting opportunity for you. We are looking for an individual with the skills to do phonological analysis on a language and integrate it into speech recognition, speech synthesis, and natural language processing engines. The ability to create and fine-tune pronunciation dictionaries for transcribing large datasets is highly desirable.

Qualifications we look for: 

Strong background in phonology and computational linguistics
Proficient in phonology and linguistics of multiple languages
Experienced in development of language resources for speech recognition and text-to-speech technologies in several languages including English
Strong understanding of fundamentals in acoustic phonetics – proven ability to learn the nuances of languages that you do not speak
Proficient in the International Phonetic Alphabet for transcription of dictionaries
In depth understanding of morphology and syntax of English and at least one other language
Proven experience in understanding of processes and heuristics involved in automating linguistic tasks such as stress prediction, prosody prediction and syllabification
Masters or Ph.D. in Computational Linguistics, Phonology, Computer Science, Artificial Intelligence or related field

Qualifications that would be a plus:

Experience with designing and executing large-scale data collection efforts for multiple languages
2+ years of experience with speech recognition, text-to-speech, or natural language processing
2+ years of experience with data mining, pattern recognition, and machine learning techniques

This position is not eligible for H-1B or any other kind of temporary or permanent sponsorship for work authorization by Interactive Intelligence. Therefore, if you will require sponsorship from us for work authorization now or in the future, we cannot consider your application at this time. 

To all recruitment agencies: Interactive Intelligence, Inc. does not accept unsolicited agency resumes. Please do not forward resumes to Interactive Intelligence, Inc. employees or any other company location. Interactive Intelligence, Inc. is not responsible for any fees related to unsolicited resumes and will not pay fees to any third-party agency or company that does not have a signed agreement with the Company. 



About this company
Call center software is what other vendors offer. With the unified IP business communications offering from Interactive Intelligence, your business gets complete solutions. A complete contact center platform for multichannel interactions, offered with a choice of deployment models – as a cloud-based service, on-premise, or a managed service where we do it all. IP telephony that extends scalable, application-rich IP PBX and messaging functionality throughout the enterprise. Business process automation to automate multi-step people-centric processes, and document management solutions to handle even the most extensive information volumes. Our industry solutions are complete and tailored with expertise in accounts receivable management, financial institutions, government agencies, the insurance industry, and outsourcers.

For your contact center, your enterprise and your remote and mobile workforce, our SIP-based all-in-one IP platform unifies communications as well as your business. It provides an alternative to the complexity of hardware-centric, multipoint systems. With a single integrated platform, you get ACD and multichannel queuing, IP PBX capability, screen and call recording, predictive dialing, voice mail and unified

------------------------------------------------------------


Play a part in the next revolution in human-computer interaction. Contribute to a product that is redefining mobile computing. Create groundbreaking technology for large scale systems, spoken language, big data, and artificial intelligence. And work with the people who created the intelligent assistant that helps millions of people get things done — just by asking. Join the Siri team at Apple.

The Siri Speech team is looking for exceptionally skilled engineers to work on the core speech recognition technologies underlying Siri.

Key Qualifications

Strong analytical and problem solving skills
Outstanding communication skills, both oral and written
Exceptional programming skills in at least C or C++; polyglots preferred
Prior experience building distributed systems

Description

Siri has become one of the most used speech products in the world while revolutionizing how people interact with mobile devices. The Siri Speech team is looking for a full-stack engineer to work on automatic speech recognition for Siri. To succeed in this role, you must be a strong programmer and a creative problem solver who thrives in a fast-paced environment, working across teams and organizations. You love building distributed systems at massive scale, tackling impossible problems, and you have a passion for customer experience. You enjoy learning new things and creating life-changing products. 

You possess strong analytical skills and an interest in speech recognition, machine learning, or working with big data. You have deep technical capabilities and strong communication skills. You're fascinated by the hard problems that come with building Internet-scale cloud services, and you enjoy working as part of a small, talented team of researchers and engineers building services that scale to support hundreds of millions of iOS devices all over the world.

Education

B.S. or M.S. in Computer Science or equivalent experience

Additional Requirements

Experience with ASR systems in multiple programming languages
Familiarity with speech recognition toolkits (HTK, Attila, Kaldi, SRILM, OpenFST, etc.)
Interest in machine learning and natural language processing
About this company
Apple designs Macs, the best personal computers in the world, along with OS X, iLife, iWork and professional software. Apple leads the digital music revolution with its iPods and iTunes online store. Apple has reinvented the mobile phone with its revolutionary iPhone and App Store, and is defining the future of mobile media and computing devices with iPad.





----------------------------------------------------------

Technology differentiation is important to Honeywell.  Our teams utilize advanced research in speech recognition to create leading edge technologies.

The Honeywell Automation and Control Solutions (ACS) Lab is an enterprise focused on sensors, wireless, and speech technology developments which are leveraged across Honeywell’s ACS businesses  to develop next generation products. 

At Honeywell, we look for people driven by a desire to be challenged, contribute and grow.  Our people make Honeywell a special company and are a key competitive advantage.

We are currently looking for a Speech Applications Architect to be a thought leader and engage in research, design, and development of technologies for speech recognition.  
What are we doing? http://online.wsj.com/article/PR-CO-20130919-904989.html?mod=crnews
What's it look like?  http://www.multivu.com/mnr/63429-honeywell-first-voice-activated-cloud-connected-thermostat-diy-homeowners
If you are well-versed in Nuance, Microsoft TellMe, AT&T Watson, CMU Sphinx, or similar, and have an understanding of speech recognition algorithms and Natural Language Processing, we’d love to talk with you!





------------------------------------------------------------------


Basic Qualifications - Years of Experience:

10 years

Scope of Responsibility/Expectation:

Motorola is seeking a person with the ability to develop speech recognition solutions/algorithms for mobile platforms, solid programming skills, experience with mobile/embedded systems, and an interest in deploying industry-leading speech technology.

Software programming:
- Implements speech recognition, synthesis, and voice processing algorithms and related software on hardware platforms
- Evaluates hardware platforms for feasibility of implementing Speech technologies
- "Ports" existing technology software to various platforms including DSPs
- Develops and maintains tools to support the application of Speech technologies
- Defines and implements simulations and scripts for validating, evaluating, and improving the performance of Motorola’s Speech technologies

Technology Algorithms:
- Learns and understands existing proprietary voice algorithms in depth
- Collaborates with the speech technologists on algorithm development and improvement of speech recognition, synthesis, and voice processing algorithms

Specific Knowledge/Skills:

Broad and deep experience in signal processing for speech.
Solid knowledge of analytic techniques, statistics, mathematical modeling.
Proficiency in assembly language and embedded "C", demonstrated by a past primary programming role in one or more fully-released products.
Proven ability to work creatively in small teams and solo; equally adept at high-level algorithmic software design and low-level code optimization.
Ability to develop software from existing code, detailed specification, or general conceptual outline.
Experience with inter-process communications, message passing/queuing.
Good oral and written communication skills, ability to exchange and debate complex technical concepts.
Familiarity with embedded systems hardware, ADCs, DACs, ability to read schematics.

Preferred:
Five years programming experience within a product development environment.
In-depth knowledge of multiple hardware platforms, especially DSPs.
Experience with Perl, Tcl/TK, or similar scripting languages.
Experience with Git or other revision control system.

Motorola Mobility, owned by Google, fuses innovative technology with human insights to create experiences that simplify, connect and enrich people's lives. Our portfolio includes converged mobile devices such as smartphones and tablets; wireless accessories; end-to-end video and data delivery; and management solutions, including set-tops and data-access devices.

















------------------------------------------------------------
Job Category: Legal & Corporate Affairs
Location: Redmond, WA, US
Job ID: 857407-129334
Division: Legal & Corporate Affairs

What is multimedia? Streaming media, codecs, speech and natural language processing, image processing, conferencing, and more. Multimedia is ever-present - images and video on speech enabled phones, computer/camera combinations to enable conferencing, and movies and more delivered to games consoles. Multimedia in a legal group? Yes! We are looking for multimedia experts to join our patent team to help us evaluate deeply technical patents and help us shape the direction of this portion of Microsoft’s patent portfolio. 

The Innovation and Intellectual Property Group’s (IIPG) Patent team is an interdisciplinary team responsible for creating, maintaining, and executing Microsoft’s patent strategy. We are attorneys and engineers and legal specialists who work together to develop this key element of Microsoft’s strategy - patents. 
We are looking for several individuals who combine a deep knowledge of some technical area of multimedia with an interest in learning about intellectual property. It is a plus if you have the ability to understand technology markets so you can anticipate the trends driving them. In this role you will work closely with licensing executives, engineers in Microsoft’s various businesses, and patent prosecution and litigation attorneys in support of Microsoft’s patent development, licensing programs, litigation efforts and acquisition efforts. Exemplary responsibilities of the position include but are not limited to:
- Researching technology areas, including working with Microsoft engineering teams, to identify critical areas of technology that will be important to Microsoft in the near future
- Identifying and analyzing patents in Microsoft’s portfolio that may be relevant to technologies that are being deployed widely 
- Reviewing patent applications during key points during the patent lifecycle and providing detailed guidance on prosecution, international filing decisions, continuations, abandonments, etc. 
- Reviewing potential ideas and providing input on which ones should be patented
- Identifying, analyzing and making recommendations on patent portfolios that IPG is evaluating for acquisition, to verify their condition and to inform their potential value
- Evaluating 3rd party patents, when necessary, in support of licensing negotiations or litigation, including assisting with developing strategy and representing Microsoft in patent licensing negotiations, if necessary
- Managing and reviewing work product of external vendors that provide portfolio analysis and patent mapping services 
- Contributing to the IPG’s strategy development process by applying your technical proficiency and creative problem solving skills to recognize market trends, opportunities and threats.
Desired qualifications:
10+ years' work experience and/or research in at least one of the following technical areas. This experience can be education, work experience, or a combination of the two, but must be at a technologically deep level. Experience working with multimedia in the devices and services space is a plus, as is knowledge of media services exposed by cloud computing systems. Ideally, an understanding of the current state of the technology in use by all major players in the industry.
o Audio and/or Video Codecs: Information theory and data compression. Strong knowledge of at least one current high definition video codec such as H.264/HEVC, VP8/9 or audio codec such as Vorbis, Opus, Siren. Deep understanding or direct experience implementing multimedia data compression and decompression algorithms. 
o Speech: Statistically-based trained and untrained speech recognition systems, ideally with modern conversational understanding systems. Strong technical understanding of deep neural networks or hidden Markov models and the underlying science behind them. Speech recognition engines, speech-enabled applications, or speech recognition services/APIs exposed by cloud computing systems.
o Image processing: All aspects of image processing from acquisition and compression to processing and error/image correction. 
o Streaming media: Experience with development of the technologies underlying playlists, trick play, time shifting, and media players. Conferencing systems, including speaking-person recognition.
A degree in computer science, computer or electrical engineering or physics or significant work experience in software or hardware development 
Experience in patent analysis and/or supporting patent licensing activities is desirable but not required
Strong communication and presentation skills, with the ability to explain complex situations to diverse audiences
Demonstrated outstanding written and verbal communication
Demonstrated advanced knowledge of Excel, proficiency in PowerPoint and ability to ramp quickly on a variety of analytical patent tools
Demonstrated ability to partner on projects with others, working across the IP organizations

This description has been designed to indicate the general nature and level of work performed by an employee within this position. The actual duties, responsibilities and qualifications may vary based on assignment or group. Microsoft is an Equal Opportunity Employer (EOE) and strongly supports diversity in the workplace.










About this company
AMAZING THINGS HAPPEN HERE! 

At Microsoft, we're about helping customers realize their potential. From gamers to governments, moms to mega-corporations, we serve just about every kind of customer, all over the globe. 

Many people think Microsoft = software. We do do software, but we also do hardware, services, research, and more. We work on PC operating systems and applications, like Windows and Windows Live; products for IT professionals and developers, like Windows Server and Visual Studio; online services such as Bing and MSN; business solutions like Office and Exchange; and devices like Xbox, keyboards, webcams, and mice. We're passionate about what we do. 

What this means if you come to work here is opportunity: to do things that make a real difference in millions, even billions, of lives; to reach your potential. So why not take a closer look at Microsoft? We think you'll find that amazing things really do happen here.










---------------------------------------------------------------



Basic Qualifications - Years of Experience: 10 years

Scope of Responsibility/Expectation:

Seeking Computational Linguist with interest in speech user interface technology. Must be able to juggle multiple projects and priorities, and be very organized with strong attention to detail. Must be a fast learner interested in phonetics, phonology and technical concepts.

- Develop, tune and test speaker-independent recognition sets and speech prompts.

- Recruit and record participants for tuning and testing of speech recognition sets.

- Recruit participants for QA testing of tuned recognition sets.

- Review recognition vocabulary lexical properties and menu structure for optimal product performance.

- Provide guidance on dialogue design and recognition set makeup for optimal speech recognition success rate.

Specific Knowledge/Skills:

- B.A./B.S. in Linguistics required; M.A. in Computational Linguistics or comparable experience preferred.

- Job experience with demonstrated focus in computational linguistics required.

- Excellent understanding of phonological and phonetic concepts required.

- Ability to interpret spectrograms and transcribe in IPA required.

- Computer literacy (Windows, DOS) and experience with Microsoft applications required.

- Experience using tools such as Audition, Peak, Sound Forge, or Praat preferred.

- Experience with Linux, Cygwin, C/C++, Tcl preferred.

- Knowledge of WorldBet or SAMPA preferred.

- Working knowledge of multiple foreign languages preferred. 

- Some knowledge of speech recognition algorithms and/or natural language processing (NLP) a plus.
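
The spectrogram-reading skill required above can be made concrete with a short sketch. The following is an illustrative NumPy computation of a magnitude spectrogram (the signal, frame length, and hop size are assumptions for the example; tools named in the posting, such as Praat or Audition, do this internally):

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram: a Hann-windowed short-time Fourier transform."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One-sided FFT magnitude per frame: rows = time, columns = frequency bins
    return np.abs(np.fft.rfft(frames, axis=1))

# A 1 kHz tone sampled at 8 kHz should peak in bin 1000 / (8000/256) = 32.
fs = 8000
t = np.arange(fs) / fs
spec = spectrogram(np.sin(2 * np.pi * 1000 * t))
print(spec.shape)        # (61, 129): 61 frames, 129 frequency bins
print(spec[0].argmax())  # 32
```

Reading formant structure off such a time-frequency matrix is exactly what IPA transcription against a spectrogram involves.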
About this company
Motorola Mobility, owned by Google, fuses innovative technology with human insights to create experiences that simplify, connect and enrich people's lives. Our portfolio includes converged mobile devices such as smartphones and tablets; wireless accessories; end-to-end video and data delivery; and management solutions, including set-tops and data-access devices.




-------------------------------------------------------------------

Play a part in the next revolution in human-computer interaction. Contribute to a product that is redefining mobile computing. Create groundbreaking technology for large scale systems, spoken language, big data, and artificial intelligence. And work with the people who created the intelligent assistant that helps millions of people get things done — just by asking. Join the Siri Speech team at Apple. 

The Siri team is looking for exceptionally skilled and creative Engineers eager to get involved in hands-on work improving the Siri experience.

Key Qualifications

Experience building and tuning large vocabulary speech recognition systems
Ability to implement experiments using scripting languages (Python, Perl, Ruby, bash) and tools written in C/C++
Experience working with standard speech recognition toolkits (such as HTK, Attila, Kaldi, SRILM, OpenFST or equivalent proprietary systems) is preferred
Experience working on ASR systems in multiple languages is preferred
Large scale data analysis experience using distributed clusters (e.g. MapReduce) is preferred
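
The MapReduce pattern mentioned in the last qualification splits a job into independent map, shuffle, and reduce phases. A toy single-process sketch (an illustration only, not Apple's infrastructure) for word counting:

```python
from collections import defaultdict

# Map phase: each "mapper" emits (word, 1) pairs from its shard of text.
def mapper(shard):
    return [(word, 1) for word in shard.split()]

# Shuffle phase: group emitted values by key across all mappers.
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: each "reducer" sums the counts for one word.
def reducer(key, values):
    return key, sum(values)

shards = ["hey siri set a timer", "hey siri call home"]
mapped = [pair for shard in shards for pair in mapper(shard)]
counts = dict(reducer(k, v) for k, v in shuffle(mapped).items())
print(counts["hey"], counts["siri"], counts["timer"])  # 2 2 1
```

In a real cluster the mappers and reducers run on different machines and the shuffle moves data over the network; the structure of the computation is the same.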

Description

You will be a part of a team that's responsible for a wide variety of speech-related research and development activities, including acoustic modeling, language modeling and tools development. Our speech recognition research is typically data driven and we are particularly interested in unsupervised techniques to leverage large quantities of data. You should be passionate about creating phenomenal products.

Because you'll be working closely with engineers from a number of other teams at Apple, you’ll need to be a team player who thrives in a fast paced environment with rapidly changing priorities.

Education

M.S. or PhD in Computer Science or related field
About this company
Apple designs Macs, the best personal computers in the world, along with OS X, iLife, iWork and professional software. Apple leads the digital music revolution with its iPods and iTunes online store. Apple has reinvented the mobile phone with its revolutionary iPhone and App Store, and is defining the future of mobile media and computing devices with iPad.






---------------------------------------------------------

Basic Qualifications - Years of Experience:

10 years

Scope of Responsibility/Expectation:

Seeking technologist with strong background in speech recognition
Ability to develop and implement new or derived algorithms within the limitations of low-power and embedded platforms
Ability to prosper and work collaboratively with an innovative technology development team
Driving research and development to improve quality of acoustic modeling within limitations of wide-ranging systems, from very small to very large footprints and in noisy environments
Identifying and transferring new and existing technologies to embedded platforms, including different processors, memory subsystems, operating systems (or lack thereof) and audio channels
Experience in porting algorithms to embedded processor and mobile platforms

Specific Knowledge/Skills:

Ph.D. with a focus on speech recognition or related field preferred
Demonstrated ability to lead technology evolution through disciplined processes, as shown by publications and past accomplishments, required.
Demonstrated ability in speech recognition systems required.
Fundamental understanding of computer architecture required.
Strong knowledge of C required.
Strong knowledge of Tcl and/or other scripting language preferred.
3 or more years of experience in the speech recognition industry preferred.
About this company
Motorola Mobility, owned by Google, fuses innovative technology with human insights to create experiences that simplify, connect and enrich people's lives. Our portfolio includes converged mobile devices such as smartphones and tablets; wireless accessories; end-to-end video and data delivery; and management solutions, including set-tops and data-access devices.






---------------------------------------------------------

Nuance's Mobility Division builds innovative, intelligent and intuitive touch and speech interfaces to simplify and enhance the way people interact with mobile devices, applications, and services. Nuance Mobile solutions make mobile devices and in-car systems easier to use, automate customer self-service, and optimize the access and discovery of even the most advanced mobile applications and content - regardless of technical know-how, location, environment, or physical and literacy capabilities. 

Responsibilities:
Conducting experiments to assess the quality of language models and study the effect of language modeling variants and ancillary natural language processing technology (such as auto-punctuation) on speech recognition accuracy.
Collecting, cleaning, processing and conditioning training data.
Analyzing field data in order to identify areas of possible improvement or enhancement of language modeling process, method, and NLP techniques.
Implementation of improved training recipes and NLP prototypes utilizing programming skills in C/C++, Perl, and Python.
Discussing and presenting ideas and results within the team, reporting progress on a regular basis
Location can be any of the following: Burlington, USA; Aachen, Germany; or Montreal, Canada
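
The first responsibility above, assessing language-model quality, is usually measured by perplexity on held-out text. A minimal illustrative sketch with a toy bigram model and add-k smoothing (the corpus and the smoothing choice are assumptions for the example, not Nuance's recipe):

```python
import math
from collections import Counter

# Toy corpus; a real recipe trains on billions of words of conditioned text.
train = "the cat sat on the mat . the dog sat on the rug .".split()
test = "the cat sat on the rug .".split()

unigrams = Counter(train)
bigrams = Counter(zip(train, train[1:]))
V = len(unigrams)  # vocabulary size

def bigram_prob(w1, w2, k=1.0):
    # Add-k smoothing gives unseen bigrams a non-zero probability.
    return (bigrams[(w1, w2)] + k) / (unigrams[w1] + k * V)

def perplexity(words):
    log_p = sum(math.log2(bigram_prob(a, b)) for a, b in zip(words, words[1:]))
    return 2 ** (-log_p / (len(words) - 1))

print(round(perplexity(test), 2))  # ~4.48 on this toy setup
```

Lower perplexity on held-out data generally correlates with better recognition accuracy, which is why it is the standard offline metric for the language-modeling experiments described here.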
Desired Skills and Experience
Background in speech technology, statistical machine translation or natural language processing
Solid programming skills (C/C++, Python)
Experience using Unix/Linux and shell scripting
Solid written and oral communication skills (English)

Preferred Skills:
Explicit (industry or academic) experience with large vocabulary speech recognition
Expertise in one or more of the following areas: machine learning, statistical modeling, numerical optimization, data mining, algorithm design, software engineering, computational linguistics

Education:
Advanced degree (Master or PhD) in computer science, computational linguistics, applied mathematics, or a related field

About this company
Nuance's Mobility Division builds innovative, intelligent and intuitive touch and speech interfaces to simplify and enhance the way people interact with mobile devices, applications, and services. Nuance Mobile solutions make mobile devices and in-car systems easier to use, automate customer self-service, and optimize the access and discovery of even the most advanced mobile applications and content - regardless of technical know-how, location, environment, or physical and literacy capabilities.





--------------------------------------------------------------

Note: This role can also be located in Seattle, WA; Montreal, Canada; Aachen, Germany; or Sunnyvale, CA.

Overview:
Nuance's Mobility Division builds innovative, intelligent and intuitive touch and speech interfaces to simplify and enhance the way people interact with mobile devices, applications, and services. Nuance Mobile solutions make mobile devices and in-car systems easier to use, automate customer self-service, and optimize the access and discovery of even the most advanced mobile applications and content - regardless of technical know-how, location, environment, or physical and literacy capabilities. 

Responsibilities:
Develop acoustic models for a variety of languages for mobility projects, both for direct to consumer and for specific customer deployments.

• Build acoustic models for multiple international languages to meet product accuracy and speed requirements
• Evaluate and develop different acoustic modeling techniques to optimize the training process for different languages
• Address language-specific issues to optimize the acoustic modeling strategy for each language
• Diagnose field problems and optimize recognition accuracy for important customers
• Improve the training and testing environment to increase the efficiency of the language development process
• Document the language development process and ensure reproducibility of delivered models
• In coordination with other groups in the company, collect, organize and process speech data, and optimize lexicons for product languages
Desired Skills and Experience
Qualifications:
• Ph.D. or M.S. in computer science with a specialty in speech recognition
• Good analytical and diagnostic skills; a quick learner
• Desire and ability to be a team player
• Experience in UNIX environments and strong skills in scripting languages such as Perl and Python
• Knowledge of multiple languages is a strong plus
• Previous experience in speech recognition is a plus
4 years' experience or equivalent.
About this company
Nuance's Mobility Division builds innovative, intelligent and intuitive touch and speech interfaces to simplify and enhance the way people interact with mobile devices, applications, and services. Nuance Mobile solutions make mobile devices and in-car systems easier to use, automate customer self-service, and optimize the access and discovery of even the most advanced mobile applications and content - regardless of technical know-how, location, environment, or physical and literacy capabilities.

10 Nov 2013

Pacific Voice Conference

A reminder about the Pacific Voice Conference, which will take place in Kraków in April 2014. More than 50 submissions have already arrived, and further ones can be sent until the end of November. Submissions are reviewed on the basis of abstracts, but there will also be an opportunity to publish full papers in cooperation with IEEE and the PVSF.

www.dsp.agh.edu.pl

1 Nov 2013

A record month for dsp.agh.edu.pl

This past October was another record month for the popularity of www.dsp.agh.edu.pl. It beat September's record, when, thanks to the interest our Klaudia attracted on the AGH fan page, we had almost 1,000 visits in a single day. Is it growing interest in signal processing, or the start of the academic year and the phonetic transcription classes? ;)

25 Oct 2013

Language and Technology Conference in Poznań

Our paper "A Comparison of Polish Taggers in the Application for Automatic Speech Recognition" has been accepted at the LTC conference, which will take place in Poznań on 7-9 December. The paper describes the best-known, publicly available taggers for Polish: WMBT, Pantera, WCRFT and Concraft. It also evaluates their usefulness for language modeling in speech recognition.

www.dsp.agh.edu.pl

13 Oct 2013

OSKA 2013: the 1st National Student Acoustics Conference

Here is the announcement we received:
"Dear Colleagues, on behalf of the Student Acoustics Society of Adam Mickiewicz University in Poznań I have the pleasure of inviting you to the 1st National Student Acoustics Conference OSKA 2013. With this event we want to celebrate the recent founding of our organization and help popularize acoustics in the Polish academic community. OSKA will take place on 6-8 December in Poznań; all the details will appear in the coming days at ska.home.amu.edu.pl/. We warmly encourage you to take an active part in the conference and give a talk. We do not intend to neglect the social side either: we are planning several attractions to make the time spent together more enjoyable and, perhaps, to lay the groundwork for fruitful cooperation between the students of our universities in the future. We will gladly answer all your questions, and we would be grateful if you could pass this message on to as wide an audience as possible. Best regards, Jacek Biernacki, board member of the AMU Student Acoustics Society."

7 Oct 2013

A medal for our systems at the International Invention Show & Technomart INST 2013 in Taipei

Researchers from the Signal Processing Group of the Department of Electronics at the AGH University of Science and Technology in Kraków, carrying out the project "Biometric Voice Verification and Identification" under the leadership of Dr. Jakub Gałka, presented at the Taipei exhibition their own technology that uses voice biometrics to identify and verify customers over the telephone. It was awarded a silver medal in the category: software solution, functionality, know-how.

Voice biometrics is the most practical of all biometric technologies: it requires no special devices such as scanners or readers, and it verifies a person against a previously collected voice sample. The solution is particularly useful in customer service centres, for example making it possible to replace traditional PIN codes and passwords with a sample of the customer's voice.



Such a replacement means not only security and convenience for the customer, who will be able, for example, to make a bank transfer over the phone without touching a phone or computer keyboard, but above all substantial cost savings for call centres. With this technology, customer verification can be shortened to a few tens of seconds, which, given a large volume of phone calls, translates into measurable savings; at scale, an investment of this kind can pay for itself very quickly.
"Other voice-biometric verification and identification solutions available around the world are a very expensive investment. We offer a cheaper one," says Dr. Jakub Gałka of AGH.

The winning technology grew out of cooperation between AGH researchers and the R&D teams of Unico Software, a Kraków startup, and Techmo, an AGH spin-off company. "Thanks to this cooperation we have created a technology that is particularly attractive to the banking sector, insurance companies and telecoms. Besides the low cost of deployment, customers can also count on great flexibility in integration with other solutions: IVR systems, websites and so on. The solution can be used across various channels: the Internet, landline and mobile phones, info-kiosks," says Łukasz Dyląg, CEO of Unico Software.

The solution was made possible by the synergy of knowledge and experience between academia and business, but also by funding from the NCBiR.
The researchers from AGH and Unico Software are already preparing their next project, this time a "Virtual Sign Language Interpreter". Work will start later this year.











Przetwarzaniemowy.pl

Paweł Jaciów has built a new, proper website for our book "Przetwarzanie mowy" ("Speech Processing"). It can be found at przetwarzaniemowy.pl. Please have a look, and let us know in the comments if it can be made more attractive.


3 Oct 2013

The power of Facebook and dialogue systems

In September we had the highest number of visits to www.dsp.agh.edu.pl in our history. 30% of them came on a single day, when the AGH Facebook fan page wrote about the virtual advisor embedded on our site. See how it attracts attention... Anyone else want heavy traffic on their website? :)

2 Oct 2013

Into the new academic year with momentum

This academic year is a record one for our team. As many as four people (Maciej Dunin-Borkowski, Szymon Pałka, Tomasz Pędzimąż and Marcin Witkowski) have decided to start doctoral studies with us; the previous record was three in a single year. What is more, they have "covered" all the programmes offered at our Faculty, underlining our interdisciplinarity: Maciek - Electronics, Szymon and Tomek - Computer Science, Marcin - Telecommunications.

30 Sep 2013

Śmierć na Nilu (Death on the Nile)

We invite you to test a prototype game based on dialogue systems for Polish. Śmierć na Nilu (Death on the Nile) was developed by Sandra Imieła as her master's thesis. Any remarks, in the comments or by email, will be very valuable to us.
Link to the game.

29 Sep 2013

A National Geographic film about the voice

National Geographic has just finished making the film "Mystery of Human Voice". One of the featured specialists is Prof. Krzysztof Izdebski of the Pacific Voice and Speech Foundation, a friend and collaborator of the AGH Signal Processing Group. The professor has promised a screening of the film at AGH later this year. Details in November.

27 Sep 2013

An online presence strategy for a research team

I invite you to watch the recording of my seminar "An online presence strategy for a research team", produced by portalnaukowca.pl.

"The aim of the seminar is to discuss the basic problems of a research team's online presence strategy and to offer advice on building and maintaining a website. The seminar answers the questions: who visits researchers' websites and why; how to look after your own site without spending a fortune or devoting many hours of work to it; and which sites and portals to use to promote your presence online. Examples of Polish and foreign research-team websites are presented."

19 Sep 2013

Promotional materials for our speaker verification and recognition systems

We have published the promotional materials for Surikate and VoicePass that will be used at the innovation fair in Taiwan. Any ideas for improving them in the future? Our biometric systems are becoming more and more reliable. We are counting on a medal!

Leaflet
Roll-up

18 Sep 2013

Deloitte & Forbes Executive Congress

I have been invited to take part in the panel discussion "Science and business: real cooperation or a ritual penguin dance", held as part of the Deloitte & Forbes Executive Congress, the gala of the 14th edition of the Deloitte Technology Fast 50 Central Europe ranking (www.deloitte.com/fast50ce).

Tests of the new version of Sarmata on telephone speech

As part of a deployment for ADESCOM Polska we have made major changes to the SARMATA speech recognition system. In recent tests its new version achieved 92% accuracy on real telephone recordings and 95% on in-house test sets.

13 Sep 2013

Speech samples of various languages

Our site speechsamples.agh.edu.pl now offers the recordings we have managed to collect so far. Feel free to use them, and please pass the address on to anyone who could add a sample of another language.


12 Sep 2013

An online presence strategy for a research team

I invite you to the seminar "An online presence strategy for a research team" in the Main Building of the Poznań University of Economics, al. Niepodległości 10, on 25 September at 13:00. The seminar will be streamed by portalnaukowca.pl.

www.dsp.agh.edu.pl

3 Sep 2013

Problemy Kryminalistyki

Our paper "Zastosowanie algorytmu DTW jako narzędzia w identyfikacji mówcy" ("Application of the DTW algorithm as a tool in speaker identification"), written in cooperation with colleagues from the Institute of Forensic Research, has appeared in the latest issue of Problemy Kryminalistyki (no. 280).

The article discusses the problems of speaker identification and proposes a procedure that supports the acoustic part of the identification process. The concept is based on dynamic programming methods, in particular the algorithm known as DTW (Dynamic Time Warping). Tests were carried out which indicate the usefulness of the proposed procedure in determining which vowels and formants differentiate speakers well enough to individualize each of them.
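
The core of such a procedure, DTW itself, fits in a few lines. The following is an illustrative pure-Python version; the toy sequences stand in for per-frame formant measurements and are not data from the article:

```python
# Minimal DTW sketch (illustrative, not the article's implementation):
# computes the dynamic-time-warping distance between two feature
# sequences, e.g. per-frame formant values of two vowel recordings.

def dtw_distance(a, b):
    """DTW distance between two sequences of numbers."""
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local distance between frames
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

print(dtw_distance([1, 2, 3, 3, 2], [1, 1, 2, 3, 2]))  # 0.0: timing differences absorbed
print(dtw_distance([1, 2, 3, 3, 2], [5, 5, 5, 5, 5]))  # 14.0: genuinely different sequences
```

The quadratic cost table makes the time/space trade-off explicit; production implementations usually add path constraints (e.g. a Sakoe-Chiba band) to keep alignments plausible and the computation cheap.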

www.dsp.agh.edu.pl

1 Sep 2013

The "Gwara i Tekst" (Dialect and Text) conference in Kraków

Faculty of Polish Studies, Jagiellonian University
Chair of the History of Language and Dialectology, Faculty of Polish Studies, UJ
Conference programme
GWARA I TEKST (DIALECT AND TEXT)
Kraków, 27–28 September 2013
27 September (Friday)
9.00–10.00 – registration of participants, hall in front of the Libraria, Collegium Maius of the Jagiellonian University, 1st floor, ul. Jagiellońska 15
Libraria, Collegium Maius UJ – plenary session, 10.00–13.30
Opening of the conference
Address by the Dean of the Faculty of Polish Studies, Prof. dr hab. Renata Przybylska
Plenary session
Prof. dr hab. Halina Pelcowa (UMCS), Tekst gwarowy – oral history w perspektywie etnolingwistycznej
Doc. Elena Syanova (Russian Academy of Sciences, St. Petersburg), Ethnic identity of Russian-Ukrainian dialect speakers in the Voronezh region
Prof. dr hab. Halina Kurek (UJ), Tożsamość kulturowa polskiej wsi w epoce globalizacji i przemian społeczno-ekonomicznych
Discussion
11.45–12.00 – coffee break
Dr hab. Jerzy Sierociuk, prof. UAM (UAM), Gwara jako język/mowa środowiska oralnego
Prof. dr hab. Halina Karaś (UW), O reprezentatywności tekstów gwarowych – podstawy planowanego Korpusu Gwar Polskich
Prof. dr hab. Janina Labocha (UJ), Pamięć i zapomnienie w narracji autobiograficznej
Discussion
Libraria – concert by Monika Makowska (violin) and Jan Oberbek (classical guitar), 13.30–14.00
Banquet – 19.30
PARALLEL SESSIONS
15.00–19.00
Section A (ul. Grodzka 64, room 06)
Dr hab. Maria Pająkowska-Kensik, prof. UKW (UKW), Gwara kociewska we współczesnych tekstach Pomeranii
Dr hab. Maciej Mączyński, prof. UP (UP, Kraków), Gwara podhalańska z perspektywy miłośników Tatr
Dr Zofia Kubiszyn-Mędrala (UJ), Ludowy tekst artystyczny jako źródło poznania gwar (na przykładzie „Wesela hoczoskiego…”)
Dr Anna Mlekodaj (PPWSZ, Nowy Targ), „Gwarom sie nie ino godo”. O potencjale literackim gwary podhalańskiej
Mgr Michał Przywara (University of Ostrava, Czech Republic), Twórczość gwarowa we współczesnej literaturze Śląska Cieszyńskiego
Discussion
17.00–17.20 – coffee break
Dr Monika Kresa (UW), Między kiczem a autentyzmem – stylizacja na gwary podlaskie w serialu „Blondynka” (reż. Maciej Gronowski)
Mgr Ilona Gumowska (UMCS), Gwara w reklamie – sposoby i funkcje wykorzystania gwary w tekstach reklamowych
Mgr Joanna Kulczyńska (UJ), O tym jak się „słonko tyrtoli”. Gwara sandomierska w poezji Stanisława Młodożeńca
Mgr Katarzyna Staniszewska-Kogut (UP, Kraków), Wiersze Józefa Gary jako wilamowicki tekst gwarowy
Discussion
Section B (ul. Grodzka 64, room 302)
Dr Tomasz Kurdyła (UJ), Charakterystyka gwar krośnieńskich
Dr Małgorzata Frąckiewicz (UwB), Charakterystyczne cechy gwary łomżyńskiej jako świadectwo języka mówionego różnych pokoleń mieszkańców Łomży i okolic
Mgr Anna Łucarz (UKW), Jo, tera je fertich masakra – gwara kociewska wczoraj i dziś
Doc. Наталія Хібеба (Ivan Franko National University of Lviv), Диференціація назв наречених відповідно до етапів у текстах про весілля на Бойківщині
Dr Anna Piechnik-Dębiec (UJ), Gwara w języku prasy lokalnej (na przykładzie „Głosiciela”)
Discussion
17.00–17.20 – coffee break
Mgr Ирина Бакланова (Russian Academy of Sciences, St. Petersburg), Особенности номинации женских украшений в севернорусских говорах
Dr Justyna Garczyńska (UW), Analiza akustyczna samogłosek w gwarze kurpiowskiej
Mgr Katarzyna Potępa (UJ), Realizacja zmiennej (å) w polszczyźnie mówionej inteligencji wiejskiej (na przykładzie Podola-Górowej k. Gródka nad Dunajcem)
Mgr Ewa Leśniak (UP, Kraków), Powiedzenia i przysłowia gwarowe we wsi Przyszowa funkcjonujące do dziś
Discussion
28 September (Saturday)
PARALLEL SESSIONS
9.00–11.30
Section A (ul. Grodzka 64, room 06)
Dr hab. Dorota K. Rembiszewska, prof. IS PAN (IS PAN, Warsaw), Wielojęzyczność i wielokulturowość w słownikach gwarowych z obszaru pogranicza polsko-wschodniosłowiańsko-litewskiego
Dr Jan Fellerer (University of Oxford), Miejska gwara lwowska do 1914 roku – źródła i problemy badawcze
Dr Katarzyna Czarnecka (IJP PAN, Warsaw), Tekst literacki jako źródło informacji o nieistniejącej już gwarze kresowej (Andrzej Mularczyk, „Każdy żyje jak umie”)
Mgr Oksana Zakhutska (UW), Mgr Ludmiła Januszewska (IJP PAN, Warsaw), Osobliwości stylistyczne polszczyzny mówionej na Ukrainie za Zbruczem (na przykładzie polszczyzny szlacheckiej i chłopskiej wybranych wsi)
Mgr Beata Bednářová (University of Ostrava, Czech Republic), Dialekt polski na Syberii
Mgr Ekaterina Popova (University of Ostrava, Czech Republic), Polskie gwary Syberii: gwara mieszkańców polskojęzycznych wsi w Buriacji i Chakasji
Discussion
11.00–11.30 – coffee break
Section B (ul. Grodzka 64, room 09)
Dr Maciej Rak (UJ), Tekst gwarowy w świetle statystyki leksykalnej (na materiale podhalańskim)
Dr Beata Ziajka (IJP PAN, Kraków), Konceptualizacja dziecka w języku mówionym mieszkańców wsi
Dr Iwona Bielińska-Gardziel (IS PAN, Warsaw), Polskie gwarowe nazwy rodzinne w ustnych relacjach o rodzinie i słownikach gwarowych
Dr Monika Buława (IJP PAN, Kraków), Zagadnienia aksjolingwistyczne w dialektologii – stan i perspektywy badań
Mgr Tomasz Jelonek (UJ), Miejsce i funkcja mikrotoponimów w polszczyźnie mówionej mieszkańców wsi (na przykładzie mikrotoponimii Truskolas i wsi okolicznych w powiecie kłobuckim)
Discussion
11.00–11.30 – coffee break
Section C (ul. Grodzka 64, room 302)
Doc. Ганна Дидик-Меуш (National Academy of Sciences of Ukraine, Lviv), Писемні тексти періоду ранньомодерної України і мовний портрет їх авторів
Dr Maria Trawińska (IS PAN, Poznań), Cechy gwarowe w świetle kursywy gotyckiej
Dr Błażej Osowski (UAM), Wielkopolskie słownictwo gwarowe w kontekście historycznym
Mgr Konrad K. Szamryk (UwB), Północnopolska leksyka gwarowa w XVIII-wiecznych kazaniach Krzysztofa Kluka
Dr Justyna Kobus (UAM), Opis statyczny czy dynamiczny fleksji gwarowej – rekonesans badawczy
Discussion
11.00–11.30 – coffee break
PLENARY SESSION – ul. Grodzka 64, room 06
11.30–14.00
Prof. dr hab. Józef Kąś (UJ), Gwara w słowniku gwarowym
Doc. Наталя Хобзей (National Academy of Sciences of Ukraine, Lviv), Текст як ілюстрація діалектного словника
Doc. Тетяна Ястремська (National Academy of Sciences of Ukraine, Lviv), Текст як джерело дослідження діалектної системи
Prof. dr hab. Bogusław Wyderka (UO), Problemy teoretyczne współczesnej dialektologii
Dr hab. Kazimierz Sikora (UJ), Incipit w dialogicznym tekście gwarowym
Discussion
Conference summary (Dr hab. Kazimierz Sikora, Dr Maciej Rak)

PhD position

"We have some open positions for Phd at the Intern. Phd School of ICT in Modena for a  curriculum in computer vision.
http://www.ict.unimore.it/
themes
-egocentric vision/wareable cameras
-videosurveillance
-object recognition

please contact me for any question"
Rita Cucchiara


Prof. Rita Cucchiara
Dipartimento di Ingegneria Enzo Ferrari
http://imagelab.ing.unimore.it