
Software Engineer (AI Strategy & Enablement) (m/f/d)


AI · Programming



Your tasks

  • Lead the strategy, design, and delivery of AI-driven solutions, spanning LLMs, ML models, and intelligent workflows.
  • Architect and implement scalable AI systems that integrate smoothly with enterprise-grade infrastructures and development practices.
  • Collaborate closely with engineering, product, and leadership teams to identify where AI can deliver the most value, and help turn those opportunities into real solutions.
  • Develop and maintain internal tooling, frameworks, and guidelines to enable other teams to work effectively with AI and ML technologies.
  • Guide the adoption of GenAI capabilities, including prompt engineering, RAG pipelines, and agent-based architectures, always with a focus on long-term maintainability (a minimal sketch of a single RAG step follows this list).
  • Ensure systems are observable, testable, performant, and aligned with data privacy, security, and compliance needs.
  • Stay actively informed on the evolving AI/ML ecosystem (GenAI, MLOps, vector search, model serving) and evaluate new tools and practices for enterprise readiness.
  • Promote a culture of learning, experimentation, and thoughtful adoption of AI technologies across teams.
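
For orientation, the sketch below illustrates the RAG responsibility above as a single retrieval-augmented generation step: rank knowledge chunks by cosine similarity to the query embedding, assemble the top matches into a prompt, and ask the model. The embed() and llm_complete() helpers are hypothetical placeholders for whatever embedding model and LLM endpoint the team actually uses; this is a sketch of the pattern, not a reference implementation.

```python
# Minimal, illustrative RAG step. `embed()` and `llm_complete()` are
# hypothetical stand-ins for the real embedding and LLM clients.
from dataclasses import dataclass

import numpy as np


@dataclass
class Chunk:
    text: str
    vector: np.ndarray  # embedding of `text`


def embed(text: str) -> np.ndarray:
    """Hypothetical embedding call; replace with the real model client."""
    raise NotImplementedError


def llm_complete(prompt: str) -> str:
    """Hypothetical LLM completion call; replace with the real API client."""
    raise NotImplementedError


def retrieve(query: str, chunks: list[Chunk], k: int = 3) -> list[Chunk]:
    """Rank chunks by cosine similarity to the query embedding and keep the top k."""
    q = embed(query)
    q = q / np.linalg.norm(q)

    def score(chunk: Chunk) -> float:
        v = chunk.vector / np.linalg.norm(chunk.vector)
        return float(np.dot(q, v))

    return sorted(chunks, key=score, reverse=True)[:k]


def answer(query: str, chunks: list[Chunk]) -> str:
    """Assemble retrieved context into a grounded prompt and call the model."""
    context = "\n\n".join(c.text for c in retrieve(query, chunks))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return llm_complete(prompt)
```

In production, the in-memory ranking would typically sit behind a vector store, and prompt templates, retrieval parameters, and evaluation would be versioned and tested like any other code, which is where the focus on long-term maintainability comes in.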

What do we expect from you?


Key Requirements 

  • 4+ years of experience in software or ML engineering, with a strong foundation in backend architecture and distributed systems.
  • Proven track record designing and delivering AI-powered systems in production, preferably in enterprise environments.
  • Proficient in Python (or Node.js), with deep experience building robust, maintainable, and scalable services.
  • Strong understanding of LLMs, embeddings, prompt engineering, RAG patterns, and GenAI tooling (e.g., LangChain, Transformers, Hugging Face, OpenAI APIs).
  • Comfortable building for real-world complexity: multi-tenant setups, observability, performance, cost tracking, and governance.
  • Familiarity with modern infrastructure tooling: containerisation (Docker), orchestration (e.g., Airflow, Temporal), cloud services (AWS, Azure, GCP).
  • Experience driving cross-functional initiatives and mentoring technical teams on AI capabilities.
  • Fluent in English and an excellent communicator, able to collaborate effectively across disciplines.

Nice-to-Have 

  • Experience with model serving and inference frameworks (e.g., vLLM, TensorRT-LLM, LiteLLM).
  • Familiarity with vector databases (e.g., Qdrant, Weaviate, Pinecone) and semantic search design (see the sketch after this list).
  • Exposure to MLOps or AIOps practices, including monitoring, retraining, and lifecycle management.
  • Background in data science or ML beyond GenAI use cases (e.g., time-series, anomaly detection, recommendation systems).
  • Contributions to open-source tools or internal enablement platforms.
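
As a rough illustration of the semantic search design mentioned above, the sketch below shows the core pattern that a vector database such as Qdrant, Weaviate, or Pinecone provides as a managed service: store normalised embeddings and rank them by cosine similarity against a query vector. The VectorIndex class is purely illustrative and assumes embeddings are produced elsewhere.

```python
# In-memory illustration of the semantic-search pattern behind vector
# databases: normalised embeddings ranked by cosine similarity.
import numpy as np


class VectorIndex:
    """Tiny cosine-similarity index: upsert document vectors, search by query vector."""

    def __init__(self) -> None:
        self._ids: list[str] = []
        self._vectors: list[np.ndarray] = []

    def upsert(self, doc_id: str, vector: np.ndarray) -> None:
        # Normalise once at write time so search is a plain dot product.
        self._ids.append(doc_id)
        self._vectors.append(vector / np.linalg.norm(vector))

    def search(self, query_vector: np.ndarray, limit: int = 5) -> list[tuple[str, float]]:
        q = query_vector / np.linalg.norm(query_vector)
        scores = np.stack(self._vectors) @ q  # cosine similarity per stored vector
        top = np.argsort(scores)[::-1][:limit]
        return [(self._ids[i], float(scores[i])) for i in top]
```

A real system would add payload filtering, persistence, and approximate nearest-neighbour indexing, which is exactly what the managed vector databases listed above handle.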


What you can expect




  • Diverse, Global & Expert Team
  • Further education through our Academy
  • Hybrid work
  • Career Growth
  • Great equipment & tools
  • Healthcare services
  • Afterwork events
  • Employee referral program

Our team

This application is processed by

Beatriz Agostinho
Ana Patrícia Marques

Join the best experts in Cloud Data & AI. We're waiting for you!





Get to know our culture and become a part of the synvert team!
