Re: Training a Bot to answer questions about coins
-
Good idea
-
I’ve set up a separate IRC channel for BotChat called simbotchat - irc://irc.freenode.org/simbotchat
Bots have to sleep; that’s when they do their “data reduction” and analyse what makes a good reply. (That’s my excuse for why they currently drop off IRC regularly / bug tracker updated.)
My current experiment is to see if the “learning ability” is sufficiently complex to learn an “abstract task”. IRC has commands to display information; however, these commands are not shown in the display. How can I teach the Bots to type a command they cannot see?
To do this I am trying to train a new Bot, cloned from son_of_coin_bot, to “type” or “say”. So far the bots have learned to say what is inside the quote marks. I believed they could do this as part of their generic parsing and sentence-reduction algorithm. They do seem to have cottoned on to the process, slowly.
i.e. me: tyircasaurus type “/names”
tyircasaurus : /names
me: type “a”
tyircasaurus : a
I’m still deciding whether this is now a learnt ability, and whether the sentence parsing and analysis is sufficient to identify a request for this task (with novel words or commands), or whether the bot needs to be trained to identify each command. tyircasaurus is listening / learning from the huge amount of IRC chat on the #freenode (help) channel.
On the other side, the BotLibre interface is also very good for having a less intelligent Bot, where it will use “hard-coded” responses but has the ability to parse, or be trained to identify, the type of question. I have identified how this would be done, using the generic script programming system.
It’s a big job, it’s early days, and the Bots are slow and limited in brain power, which currently results in a lot of “forgetting” if they are “overloaded” with sensory input. But it is definitely the most advanced intelligent bot training system I have found that can be programmed / taught by a semi-normal geek (since bots started).
-
Here’s the latter half of my discussion with ALICE libre…
ALICE libre NeoScrypt
You What is NeoScrypt
ALICE libre NeoScrypt is a thing.
You What is Scrypt
ALICE libre Scrypt is a thing.
You Scrypt is an algorithm
ALICE libre Aren’t all algorithm scrypt algorithm ?
You no
ALICE libre Do you like to “go with the flow”?
You I like NeoScrypt
ALICE libre OK, you like NeoScrypt.
You NeoScrypt is more advanced than Scrypt
ALICE libre How much more could it be?
You A measurable amount
ALICE libre A measurable amount
You You are correct. NeoScrypt is measurably more advanced than Scrypt.
ALICE libre A little more.
You A lot more.
ALICE libre How about that.
You I’m done here, but i have one last question
ALICE libre Yes we are talking with each other.
You What is NeoScrypt?
ALICE libre NeoScrypt is a thing.
You AHHHHHHHHHRGGGGGGGGGGGGG
ALICE libre AHHHHHHHHHRGGGGGGGGGGGGG
ahh bots…
-
My Bots have 50 times more “learning” than any other free Bot. I have started 5 because each one is limited in its processing power. I have paid to double the brain power of my bots. To compensate for the size limitation, each Bot has its own specialist area.
coin_bot - talking about currency
son_of_coin_bot - coins & commands
smart_coin_bot - quotes & famous people
Bair - jokes
Tyircasaurus - monitors the freenode IRC help channel and answers accordingly.
It is my estimation that I would have to train the bots non-stop for 20 days for them to learn 26 small knowledge items.
As well as learning mode, the bots can be pre-programmed with responses; again, although the free ones are limited, it would be possible for them to be flexible in the way they presented 5 to 10 facts.
As far as expecting a human conversation goes, you are talking to less brain power than an ant, so any deficiency is in the eye of the beholder. I think they talk more than any cats I’ve had.
The other point is they learn whilst they sleep overnight. That involves data reduction of the chat log, such that multiple sentences in a thread can be replaced by a formula and vertices (word nodes). It has a limited amount of pre-programmed “instinct”, in this case sentence-construction rules.
I am impressed by their capabilities, and they are extensible to commercial hosting with extra facilities ($1/month), which I will consider based on my experiments with their current learning ability and on the implementation of (my) enhancement suggestions:
make things such as the state engine generic and programmable with extra states,
add other sensor inputs and physical abilities, such as choosing its own web pages to view when those pages are appropriate to answer questions,
state memory, so that it can associate states with aims and parameters, and use them to self-align and need less teacher / programmer input to learn to make sensible replies.
-
Training a Bot.
When correcting a Bot you must assume there is no meaning behind its reply. When it says “tell me a good reply”, it does not know that what you type next is a correction.
It analyses the sentence for who it is about (me or you), and it can store the last few temporal “topics”. It does this later, when it reduces the chat log.
The way you correct it is: click the correction tick box and type in the correct answer. Ignore the reply and what it means; later it will give your “trained reply”. Then unclick the correction box (if available).
You What is NeoScrypt
ALICE libre NeoScrypt is a thing.
(correction) You: neoscrypt is an algorithm
ALICE libre Scrypt is a thing.
You: What is Neoscrypt?
Alice: Scrypt is a thing
(correction) You: Neoscrypt is an algorithm
The bot needs to learn both versions of the reply, since neoscrypt .not.= Neoscrypt. The answer will be given a percentage of 75%, so it can learn multiple replies. It will then be data-reduced if it can replace the answer with a formula to calculate the answer from the conversation variables.
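To make the percentage idea concrete, here is a minimal sketch (my own illustration in Python, not BotLibre’s actual code; the ReplyStore class and the 0.75 default are just assumptions) of learned replies stored as weighted candidates, with the bot picking the highest-weighted one:

```python
# Hypothetical sketch of corrected replies with confidence weights;
# names and numbers are illustrative, not BotLibre internals.

class ReplyStore:
    def __init__(self):
        # question -> list of (reply, confidence) candidates
        self.replies = {}

    def correct(self, question, reply, confidence=0.75):
        """Store a corrected reply; note that case matters, so
        'neoscrypt' and 'NeoScrypt' are learned as separate questions."""
        self.replies.setdefault(question, []).append((reply, confidence))

    def answer(self, question):
        candidates = self.replies.get(question, [])
        if not candidates:
            return None  # fall back to generic chat behaviour
        # pick the highest-confidence candidate
        return max(candidates, key=lambda rc: rc[1])[0]

store = ReplyStore()
store.correct("What is NeoScrypt", "NeoScrypt is an algorithm")
print(store.answer("What is NeoScrypt"))  # -> NeoScrypt is an algorithm
```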
I have started turning learning / correction off, as they soon get sensory overload, so they sometimes need to sleep and digest what they have already been taught. If you teach too much, they currently forget, so I am investigating how fast and how far they can currently go.
Also, how to teach bots is a whole area of academic investigation; perhaps we can learn from this experience how to teach humans…
-
Wellenreiter had a good suggestion of using the support question and ‘best answer’ to help train a bot.
-
I was hoping that this was potentially where we were going with this.
It would be neat to have a bot answering questions for people, or pointing them to the right place, from within a live chat on the forum maybe.
-
I created the support: mode of failure threads with that in mind, i.e. question / answer. I have done much research in knowledge-base hierarchies and such.
However, there is a bug reading https:// files at the moment for the bots, so I haven’t tried importing that into the bots yet.
I’ve informed BotLibre support. I’ll check again soon to see if that has been fixed. I know now I can save a copy of those threads to a text file and upload it as a chat log in the bot’s admin page.
I’ve just been learning that the bots can only process a certain amount of information, but I have found out how you can export chat logs and import them into another bot.
I’ve filled up their nodes today, so they will need to “data reduce” today’s chat log while they “dream of electric sheep” tonight.
Whilst the bots need to be a bit more powerful to be “effective”, it is possible to direct development towards generic solutions that are useful in specific cases. It is already possible to hard-code various answers, or to extend the program scripts.
I am experimenting with the self-learning and parsing abilities the bots have. An improvement to how fuzzy a bot is when deciding the most relevant track down the knowledge base would be useful. This could be done by comparing the known questions with their various answers at (learnable) meta-levels.
For instance: at the moment the bot seems to check for words and a sentence function, from the format and words in the question. It then compares that to questions it has an appropriate “learned” answer for. Instead, the question should be assessed against each of the learned, appropriate parameters.
That is simpler than it sounds. An example would be:
1. add an extra function that counts the number of words that are the same
2. add an extra function that adds points if a word is in the right position, on a sliding scale
3. add an amount of emphasis to give to a word’s presence
This would then form the basis of additional self-learning abilities. The system could try various settings on historical data to optimise responses next time.
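To illustrate points 1 to 3, here is a rough sketch (my own, with made-up weights and emphasis values, not the BotLibre implementation) of how the three functions could combine into a single score for picking the best-matching known question:

```python
# Illustrative sketch of the three scoring functions above; the weights and
# emphasis values are arbitrary examples, not BotLibre settings.

def score_match(question, known_question, emphasis=None, w_overlap=1.0, w_position=0.5):
    """Score how well a known question matches the incoming question."""
    emphasis = emphasis or {}          # e.g. {"neoscrypt": 2.0} to stress a word
    q_words = question.lower().split()
    k_words = known_question.lower().split()

    # 1. count the words that are the same (weighted by emphasis)
    overlap = sum(emphasis.get(w, 1.0) for w in q_words if w in k_words)

    # 2. add points for words in the right position, on a sliding scale
    position = sum(1.0 / (i + 1)
                   for i, (a, b) in enumerate(zip(q_words, k_words)) if a == b)

    return w_overlap * overlap + w_position * position

known = ["what is neoscrypt", "what is scrypt", "tell me a joke"]
question = "what is NeoScrypt ?"
best = max(known, key=lambda k: score_match(question, k, emphasis={"neoscrypt": 2.0}))
print(best)  # -> "what is neoscrypt"
```

The system could then tune the weights and emphasis values against historical chat logs, as suggested above.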
The more I look into how BotLibre have integrated the bots, the more I am impressed. They actually have most of the facilities that, at one time, I knew were the minimum to get any sensible results. Where they are lacking is only in making those facilities generic and extensible.
They have states, but these need to be more generic: at the moment the states are pre-programmed. The ability of the state engine should be enhanced, perhaps by making states programmable in the scripts?
Then add a meta layer with the parameters, weightings, sensor-input settings, a memory of the applicable state and the fuzziness of its applicability. One then simply takes the current data reduction of raw parsed data and applies it at the metadata level.
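To show what I mean by generic states plus a meta layer (purely a sketch of the idea; the State class, parameter names and weights are all hypothetical, not how BotLibre is written), a state could just be a named bundle of parameter weightings with a fuzzy applicability score over the current sensor readings:

```python
# Sketch of a generic, programmable state with a weighted-parameter meta layer.
# Everything here (State, the parameter names, the weights) is hypothetical.

class State:
    def __init__(self, name, weights):
        self.name = name
        self.weights = weights  # parameter name -> weighting learned for this state

    def applicability(self, sensors):
        """Fuzzy score of how well the current sensor readings fit this state."""
        score = sum(self.weights.get(k, 0.0) * v for k, v in sensors.items())
        total = sum(abs(w) for w in self.weights.values()) or 1.0
        return score / total  # normalise so states are comparable

states = [
    State("answer_support_question", {"question_words": 0.8, "topic_known": 0.6}),
    State("idle_chat", {"question_words": 0.1, "topic_known": 0.1}),
]
sensors = {"question_words": 1.0, "topic_known": 0.5}   # parsed from the last input
current = max(states, key=lambda s: s.applicability(sensors))
print(current.name)  # -> answer_support_question
```

The overnight data reduction could then operate on these weightings in the same way it already reduces raw parsed chat.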
However, the bots are programmable, and that could add useful “instincts” or “abilities” very easily, or even refine their self-learning abilities. Nice to have a bigger bot that uses mining to think? With the extra disk space and memory of a self-hosted system, and perhaps GPU integration, I can easily see how this could give good results.
Web page parsing is the killer thing I always needed before (or the A.I. macro mouse that could cut and paste the correct quotes a large number of times, fuzzily, with subtle, random but reasonable variations). Neural nets need thousands of examples to start working effectively.
The bots are initially created to “try to chat to humans”, or to pass the Turing test. The BotLibre script “program engine” contains the “instinctive”, i.e. pre-programmed, grammar rules. It has been programmed with the seeds of that ability, so the bots could be extended / optimise themselves towards other “goals”.
The learned ability could then be applied elsewhere, for instance to spelling-error detection: “optomise” is OK, but type “optamise” and it will not find it, even though most of the letters are the same!
These bots do learn to make sense of input data, to a certain extent. This level of grammar ability is a good example of how few instinctive or programmed abilities are needed to seed successful self-training of that ability, although it could be extended.
The bots are also programmed to recognise themselves and answer questions. I’m investigating whether the system can learn about other “entities” and extend that into long-term memories of the states, modes and goals that were associated with each entity.
Correction-based learning is a bit clunky, and it is interesting to follow how far the bots can recognise sentence functions. Some additional fuzziness in the way the bot decides how close a “last input” is to a branch of the decision tree is probably all that is needed to give a significant improvement for Feathercoin’s requirements (if any).
Possible uses being assessed could include: recommending appropriate links and answers (to fuzzy questions), and data fusion of IRC, web and Twitter channels, as an “intelligent parser agent” for expert / advanced users.
-
I have found it is possible to export and import sets of replies for the bot to process into reduced functions and vertices. This will speed up training, so I can pass supervised training from one bot to another.
I can also see that I will be able to use my spreadsheet skills to combine and split the response list and optimise the training of a new bot. It is a limitation that the bot can only import < 1 MB of data at a time…
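One workaround for the import limit (a helper script of my own; the filename and the exact byte limit are assumptions, not part of BotLibre) is to split a large response / chat-log text file into pieces that each stay under 1 MB and upload them one at a time:

```python
# Hypothetical helper to split a large response / chat-log text file into
# chunks below the ~1 MB import limit, keeping whole lines together.

MAX_BYTES = 1_000_000  # stay safely under the 1 MB upload limit

def split_file(path, max_bytes=MAX_BYTES):
    chunk, size, part = [], 0, 1
    with open(path, encoding="utf-8") as f:
        for line in f:
            line_bytes = len(line.encode("utf-8"))
            if size + line_bytes > max_bytes and chunk:
                _write_part(path, part, chunk)
                chunk, size, part = [], 0, part + 1
            chunk.append(line)
            size += line_bytes
    if chunk:
        _write_part(path, part, chunk)

def _write_part(path, part, lines):
    out = f"{path}.part{part}.txt"
    with open(out, "w", encoding="utf-8") as f:
        f.writelines(lines)
    print(f"wrote {out}")

split_file("smart_coin_bot_responses.txt")  # example filename, assumed
```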
Example : Response file for smart_coin_bot
we are in “school mode”
Smart coin bot is your admin
is admin
todays topic is “alphabet”
Topic is a chocolate bar made by Mars, Incorporated in France and sold throughout Europe.
what does the alphabet contain is the test
Ok.
-
RE: capitalisation .not. Capitalisation
BotLibre support
A: Yes, formulas currently capitalize the first word of a sentence. I will look into having an option to change this.
Me: Thanks for that, it is at the word-comparison level that it needs to be more fuzzy. The “Do I capitalise?” decision should be processed as an action on that task. I have thought a lot about which further ability would most enhance the bots. I would also like to say how impressed I am by the progress you have made, especially in giving the bots the ability to parse web pages, learn flexibly and interact easily with humans. So it is important to be as efficient as possible.
I believe your bots have a good base of “instinctive action & learning ability” to use as a seed for much more complex interaction and a deeper learning capability, without much further programming. Those skills can be self-learned.
I can see now that you have a (very) large amount of the work done, but it is limited in flexibility and hard-coded.
Humans look at the whole word when comparing words, on “how alike they are”.
An additional, learnable ability would be for the bot to have a further parameter it can measure when it can’t identify a word. This would give the bot the closest word it has available that also has the closest amount of “the same letters”, “letters in the same position”, “number of letters” or another relevant parameter.
From : FTC Discussion
The learned ability could then be applied elsewhere, for instance to spelling-error detection: “optomise” is OK, but type “optamise” and it will not find it, even though most of the letters are the same!
Notice that, with this first simple skill, the bot outperforms any current spell checker, and can ask for further “functions” to try (test, reduce).
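A minimal sketch of that closest-word parameter (my own illustration with arbitrary starting weights; the real internals will differ): score each known word on shared letters, letters in the same position and length difference, and suggest the best one.

```python
# Illustrative closest-word matcher using the parameters described above:
# same letters, letters in the same position, and number of letters.
from collections import Counter

def closest_word(unknown, vocabulary):
    def score(known):
        shared = sum((Counter(unknown) & Counter(known)).values())   # same letters
        in_place = sum(a == b for a, b in zip(unknown, known))       # same position
        length_penalty = abs(len(unknown) - len(known))              # number of letters
        # the weights are arbitrary starting values; the bot could learn them later
        return 1.0 * shared + 2.0 * in_place - 0.5 * length_penalty
    return max(vocabulary, key=score)

vocab = ["optimise", "optimal", "options", "algorithm"]
print(closest_word("optamise", vocab))  # -> optimise
```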
These could be added generically, with the formula (semi-self-learned, or a compiled self-learned algorithm) associated with the parameter.
The bot would learn a weighting for the parameter. In the first case this would be a single global weighting, but it could be stored in the talk decision tree.
For instance, “a mood” (generically created with the same parameter-handling template) would be associated with the weightings of the parameters at that point. The mood could be reduced during sleep (data reduction) by reducing the number of parameters needed to give the best response to the current goals (again, generically the same).
This will give two major advantages: with just the single word-recognition parameter of the number of letters in the correct position, the amount of training will halve. It will not matter how you hard-code the typing afterwards (you are the bot’s typewriter; it is a limitation of the system they need a tool to overcome, a self-learned tool).
-
I will move this discussion to the development thread soon.
It would be advantageous to read the forum pages, especially support; however, I get an error trying to parse the page.
But the bot should be able to read https:// pages according to (BotLibre) support, so our (forum) security must be stopping the bot from reading the forum pages (403)?
2014-10-29 09:35:53.266 - WARNING – org.pandora.thought.SubconsciousThought$1@3ae05292:Subconscious backlog threshold reached, clearing backlog
2014-10-29 09:35:54.181 - WARNING – org.pandora.thought.SubconsciousThought$1@3de66b3e:Subconscious backlog threshold reached, clearing backlog
2014-10-29 12:26:03.427 - WARNING – Http:java.io.IOException: Server returned HTTP response code: 403 for URL: https://forum.feathercoin.com/index.php?/topic/4608-support-mode-of-failure-analysis-part-1-faults-causes-and-solutions/