Do you have sleep data, such as from a consumer wearable device? Do you feel sort of meh about it? You’re definitely not alone.
When we start a new recruitment chatbot project that includes the FAQ automation feature, we use a Starter Set of questions, which is then enriched with FAQs relevant to the client before go-live. After go-live, the system is put to the test with real usage, and improves over time with training.
When I say “Starter Set,” I mean something like sourdough starter: it may not look like the end goal, but it is essential for success. In a good automated FAQ system, robust algorithms are crucial; however, the dataset, especially the dataset that is often (in the world of…
Why don’t more settlements use air well architectures? Many settlements, particularly relatively young ones, live in danger of severe water shortages. A number of readers wrote in asking why large stone structures for collecting water are increasingly common on other worlds, but not on theirs, where they would seem so needed.
OL’t: There is a wide variety of atmospheric water generation, or air well, technologies. Many settlers who were recently in transit may be more familiar with fog fences, which were an easy initial way to generate minimal water upon arrival.
EG: After initial settling in, most…
This article highlights a historical gadget from a possible future-history: Ayin Tay Kon’s molloscope, depicted in this stamp:
Ayin Tay Kon was a chemical anthropologist deeply fascinated by stress and trauma in humans, which they saw as both impractically frail and remarkably robust. They invented the molloscope as a post-hoc psychological intervention to help relieve the stress of massive-distance travel. Most human beings had left Earth in a mostly-disorganized frenzy, following large-scale shifts in the planet’s capacity to sustain life, to planets that were barely-explored, often in experimental vessels. …
Call: Do you know what time it is?
Response: Yes, we know, it is the time for the Kodo.
Call: How do you know it is the time for the Kodo?
Response: The trees were [ringing] in the dark, and the earth began to [hum]; these are the parts of the Kodo.
Call: Do you remember the story of the First Kodo?
Response: Let us remember together.
This call-and-response storytelling style is common on Udur in the Dzid mountain range region. Dzid-Udur is known for its nearly-uninhabitable climate, and for the necessity of its inhabitants to move continuously across regions to…
We’ve noticed that candidates approach recruiting chatbots more than half of the time outside of working hours; so, what topics do they ask about?
In an analysis from a few months ago, we saw some broad topics:
At ACL 2019, we introduced nex-cv: a metric based on cross-validation, adapted for the improvement of small, unbalanced natural-language datasets used in chatbot design. The main idea is to use plausible negative examples in the evaluation of text classification.
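The core idea can be sketched as follows. This is a minimal illustration, not the published nex-cv implementation: the toy nearest-centroid classifier, the `OUT_OF_SCOPE` label, the rejection threshold, and all function names are assumptions made for the sketch. The one faithful element is augmenting each held-out fold with plausible negative examples that a good classifier should decline to answer.

```python
# Illustrative sketch of cross-validation with plausible negative examples.
# The classifier and all names here are hypothetical stand-ins, not the
# published nex-cv code.
from collections import Counter
import math
import random

def bow(text):
    """Bag-of-words vector as a Counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a)
    da = math.sqrt(sum(v * v for v in a.values()))
    db = math.sqrt(sum(v * v for v in b.values()))
    return num / (da * db) if da and db else 0.0

class CentroidClassifier:
    """Toy nearest-centroid text classifier with a rejection threshold."""
    def __init__(self, threshold=0.2):
        self.threshold = threshold
        self.centroids = {}

    def fit(self, texts, labels):
        self.centroids = {}
        for t, y in zip(texts, labels):
            self.centroids.setdefault(y, Counter()).update(bow(t))
        return self

    def predict(self, text):
        v = bow(text)
        best, score = "OUT_OF_SCOPE", 0.0
        for y, c in self.centroids.items():
            s = cosine(v, c)
            if s > score:
                best, score = y, s
        # Below the threshold, refuse to answer rather than guess.
        return best if score >= self.threshold else "OUT_OF_SCOPE"

def nex_cv_sketch(texts, labels, negatives, k=3, seed=0):
    """k-fold CV where each test fold is augmented with plausible
    negative examples the classifier should reject as out-of-scope."""
    rng = random.Random(seed)
    idx = list(range(len(texts)))
    rng.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    scores = []
    for fold in folds:
        train = [i for i in idx if i not in fold]
        clf = CentroidClassifier().fit([texts[i] for i in train],
                                       [labels[i] for i in train])
        test = [(texts[i], labels[i]) for i in fold] + \
               [(t, "OUT_OF_SCOPE") for t in negatives]
        correct = sum(clf.predict(t) == y for t, y in test)
        scores.append(correct / len(test))
    return sum(scores) / len(scores)
```

The point of the augmentation step is that a plain accuracy score rewards a classifier that confidently answers everything; adding negatives penalizes exactly that failure mode, which matters for small, unbalanced FAQ datasets.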
Our experiences draw upon building recruitment chatbots that mediate communication between job-seekers and recruiters by exposing the ML/NLP dataset to the recruiting team. Recruiting teams, motivated by scale and accessibility, build and maintain chatbots that provide answers to frequently asked questions (FAQs) based on ML/NLP datasets.
Based on ~46K questions asked in 9 live chatbots, 50–60% of all incoming FAQs fall outside of working hours. After making this observation internally, we began sharing it in routine reporting. Here are some notes about what we learned while generalizing a one-time experiment to routine reporting.
At the beginning of this year, we investigated when candidates approach recruiters through a chatbot. In that analysis, we compared 5 live (anonymized) chatbots, and assumed working hours of 8am to 6pm in Central European Time. We found that around 47% of questions came outside of working hours. …
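The check described above is straightforward to sketch. The function name is illustrative, and the fixed CET offset is an assumption: a production analysis would also need to handle daylight saving time, per-client time zones, and possibly weekends.

```python
# Sketch: fraction of incoming questions outside 8am-6pm working hours.
# Fixed-offset CET is a simplifying assumption (no DST handling).
from datetime import datetime, timezone, timedelta

CET = timezone(timedelta(hours=1))

def out_of_hours_share(timestamps, start_hour=8, end_hour=18):
    """Return the fraction of timezone-aware timestamps that fall
    outside [start_hour, end_hour) in CET."""
    outside = sum(
        not (start_hour <= ts.astimezone(CET).hour < end_hour)
        for ts in timestamps
    )
    return outside / len(timestamps)

# Example: one question at 9:00 CET (inside), two at 20:00 and 3:00 CET
# (outside) gives a share of 2/3.
sample = [
    datetime(2019, 3, 4, 8, 0, tzinfo=timezone.utc),   # 9:00 CET
    datetime(2019, 3, 4, 19, 0, tzinfo=timezone.utc),  # 20:00 CET
    datetime(2019, 3, 4, 2, 0, tzinfo=timezone.utc),   # 3:00 CET
]
share = out_of_hours_share(sample)
```

Keeping the timestamps timezone-aware and converting once, at comparison time, avoids the classic bug of mixing naive server-local times with client-local working hours.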
When the goal is to design chatbots that are “honest and transparent when explaining why something doesn’t work,”* how do we — practically — do that? In practice, what doesn’t work, and why, can be difficult to find out — let alone explain to the end-user.
At jobpal, explainability means actionable and understandable feedback that helps project managers iterate on a chatbot during design, implementation, and maintenance, and especially improve data quality.
Earlier this year, we participated in a workshop on AI and Human-Computer Interaction (HCI) at CHI 2019 in Glasgow, to learn about the most current approaches in Explainable AI (“XAI”)…
I don’t know how I managed to read scientific literature for 10+ years without discovering the glory of JabRef, which is free, friendly, and easy to integrate with my existing research process and environment.