tag:blogger.com,1999:blog-6338728031095838255.post1210582217114338494..comments2019-04-20T07:31:12.481-04:00Comments on Intelligence: Annotation, with Dan He Boris Kazachenkohttp://www.blogger.com/profile/04025561850220554347noreply@blogger.comBlogger27125tag:blogger.com,1999:blog-6338728031095838255.post-10751507510175360432014-07-28T21:51:49.386-04:002014-07-28T21:51:49.386-04:00Thanks Dan, with some corrections:
D = 0;
M = 0;
...Thanks Dan, with some corrections:<br /><br />D = 0;<br />M = 0;<br />dL = 0;<br />// compute d, m<br />for (int i = 0; i < input.size()-1; ++i) { // is the index really necessary? all we need is ++pointer_to_i;<br />d = input[i+1] - input[i];<br />m = min(input[i+1], input[i]);<br /><br />if (sign(d) == sign(old_d)) {<br />D += d; // not absolute, D is signed. I said the sign is indicated at dL, but I guess that would be slightly more complicated. Doesn't really matter.<br />M += m; // m is always positive<br />++dL;<br />} else {<br />D = 0;<br />M = 0;<br />dL = 0;<br />}<br />old_d = d;<br />}<br /><br />That's skipping 2nd derivation, etc.<br />But do you really think this is a better explanation than the two versions I already posted?Boris Kazachenkohttps://www.blogger.com/profile/04025561850220554347noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-78325294775948742602014-07-27T22:25:10.996-04:002014-07-27T22:25:10.996-04:00I am going to add more code
input is the string o...I am going to add more code<br /><br />input is the string of input numbers<br />D = 0;<br />M = 0;<br />dL = 0;<br />// compute ds, ms<br />for (int i = 0; i < input.size()-1; ++i) {<br /> ds = input[i+1] - input[i];<br /> ms = min(input[i+1], input[i]);<br /><br /> if ds is of the same sign as ds for i-1 {<br /> D += abs(ds);<br /> M += abs(ms);<br /> if the sign is positive {<br /> mark dL as positive<br /> }<br /> else {<br /> mark dL as negative<br /> }<br /> }<br /> else<br /> D = 0;<br /> M = 0;<br /> dL = 0;<br />}<br />Dan Hehttps://www.blogger.com/profile/07260547441514001463noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-24716334702244839092014-07-27T22:23:28.571-04:002014-07-27T22:23:28.571-04:00This comment has been removed by the author.Dan Hehttps://www.blogger.com/profile/07260547441514001463noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-32365010481023595232014-07-12T09:49:19.811-04:002014-07-12T09:49:19.811-04:00Well, one of the reasons is that it helps me to fo...Well, one of the reasons is that it helps me to focus. And there are other ppl who prefer to start from a higher level. Easy or difficult depends on the background. My approach is not arbitrary. Anyone who starts from the same (I think unavoidable) principles should be arriving at the same conclusions.<br /><br />But yes, pseudocode is a good idea.Boris Kazachenkohttps://www.blogger.com/profile/04025561850220554347noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-47404046561177297862014-07-11T20:51:35.863-04:002014-07-11T20:51:35.863-04:00But the reason you write the method down is to hav...But the reason you write the method down is to have other ppl understand it. So it should be down to details. 
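The loops in the two code comments above can be rendered as runnable Python. This is a sketch only, with names taken from the thread (D kept signed, m always positive, Q(d) retained in case second derivation is needed); it is not the authors' code:

```python
def same_sign_spans(inputs):
    """Segment inputs into same-difference-sign spans (crude dPs).
    Returns (dL, D, M, Q) per span: length, signed sum of ds,
    sum of ms (m = min of each consecutive pair), and queue of ds."""
    spans = []
    dL, D, M, Q = 0, 0, 0, []
    old_sign = None
    for a, b in zip(inputs, inputs[1:]):
        d = b - a                    # difference between consecutive inputs
        m = min(a, b)                # partial match; positive for non-negative inputs
        sign = d >= 0                # AND(sign): zero counted as positive
        if old_sign is not None and sign != old_sign:
            spans.append((dL, D, M, Q))   # close the span, start a new one
            dL, D, M, Q = 0, 0, 0, []
        D += d
        M += m
        dL += 1
        Q.append(d)
        old_sign = sign
    if dL:
        spans.append((dL, D, M, Q))
    return spans

# Dan's example input 1 9 2 8 3 7 1 2 3 7 8 5 0 1 yields the two
# multi-d stretches discussed further down the thread:
spans = same_sign_spans([1, 9, 2, 8, 3, 7, 1, 2, 3, 7, 8, 5, 0, 1])
print((4, 7, 13, [1, 1, 4, 1]) in spans)   # True: stretch 1 2 3 7 8
print((2, -8, 5, [-3, -5]) in spans)       # True: stretch 8 5 0
```

M = 13 for the 1 2 3 7 8 stretch agrees with the min(1,2)+min(2,3)+min(3,7)+min(7,8) computation given elsewhere in the thread.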
Dan Hehttps://www.blogger.com/profile/07260547441514001463noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-39288386508942386692014-07-11T14:38:52.406-04:002014-07-11T14:38:52.406-04:00Sure, it may help. But I personally prefer to work...Sure, it may help. But I personally prefer to work on a higher level: <br />~ pseudo pseudo pseudo code.Boris Kazachenkohttps://www.blogger.com/profile/04025561850220554347noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-50044363209112127452014-07-11T12:35:16.883-04:002014-07-11T12:35:16.883-04:00This is the problem. It's hard for me to fully...This is the problem. It's hard for me to fully understand your theories, just by your explanation. And if we can't construct a workable example at this time, I guess I'll have to write some pesudocode then. Dan Hehttps://www.blogger.com/profile/07260547441514001463noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-58089878532881872852014-07-11T12:06:11.080-04:002014-07-11T12:06:11.080-04:00I only care about one problem: pattern discovery. ...I only care about one problem: pattern discovery. The algorithm will be "applicable" if it fits formal definition of "pattern".<br /><br />Testing won't help to improve it: the results are open to all kinds of interpretation. It will have to be tested for speed, say relative to human vision, but I am not there yet. I still have some loose ends that can only be worked out theoretically.<br /><br />Initial data set could be something as simple as cat videos, perhaps multiple confocal video streams.Boris Kazachenkohttps://www.blogger.com/profile/04025561850220554347noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-64690102642718490222014-07-11T08:49:24.925-04:002014-07-11T08:49:24.925-04:00Yeah, I know. But without a real example, how coul...Yeah, I know. But without a real example, how could you even tell if the method is applicable to some real world problems. 
<br /><br />So what kind of data sets this method targets? I guess for example a set of pictures, where there are common patterns? Dan Hehttps://www.blogger.com/profile/07260547441514001463noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-40534569284556766142014-07-09T11:02:09.237-04:002014-07-09T11:02:09.237-04:00Dan, "interesting" is relative. I've...Dan, "interesting" is relative. I've told you a bunch of times that I select for stronger-than-average patterns. That average is a feedback from higher levels, which represent past experience. It all depends on what the system seen before. <br />This is the most theoretical problem ever, you can't go by examples.Boris Kazachenkohttps://www.blogger.com/profile/04025561850220554347noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-32250117428631395872014-07-09T10:22:21.330-04:002014-07-09T10:22:21.330-04:00So are you able to show a real example where there...So are you able to show a real example where there are "interesting" patterns? Dan Hehttps://www.blogger.com/profile/07260547441514001463noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-5915805619117761122014-07-07T22:56:07.047-04:002014-07-07T22:56:07.047-04:00Dan,
> Is your method able to deal with all ty...Dan,<br /><br />> Is your method able to deal with all types of numeric strings<br /><br />Yes, any integers.<br /><br />> In your opinion, is the input string "interesting"?<br /><br />Well, “interesting” are strong patterns. As I said, selection for higher orders of comparison is made for individual patterns rather than a whole string. And the vast majority of inputs don’t form strong patterns. We forget almost all that we see, unless we see it many times, which then makes it an “infra” pattern.<br /><br />In your example, I don’t see any strong difference patterns, so there won’t be any second derivatives. But there might be strong complemented-difference patterns: matches between cdPs. And even if there aren’t many of those, there may still be strong 2D patterns, & so on. In any case, this selection for higher-order comparison must be automatic; it shouldn’t depend on my "opinion".<br /><br />Again, the basic criterion for second derivation is aD: average (D-last_d) in higher-level patterns that also contain mean value of Md. <br />I am currently trying to determine when to introduce more advanced criteria, such as M: a sum of input matches. A combined D & M criterion would be something like: <br />average (D * rD + M * rM) in higher-level patterns that also contain mean value of Md. Where rM = Md / M, derived by comparison between variables within a pattern (see part 6). It should become a criterion (by being sent as feedback) if it then matches across patterns, thus is predictive of Md. So, the feedback probably has to be from the level beyond the next one. 
<br /><br />Higher derivation may seem like minor issue, but the same principles should apply for other types of selective comparison.Boris Kazachenkohttps://www.blogger.com/profile/04025561850220554347noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-71933523251353680442014-07-07T14:59:51.198-04:002014-07-07T14:59:51.198-04:00Boris, as you mentioned my string is not interesti...Boris, as you mentioned my string is not interesting. I just took a real string converted from an image. It's indeed a hex string. One question: Is your method able to deal with all types of numeric strings, such as hex, base64 etc? <br /><br />The input string:<br />is: initial single-variable inputs, such as pixels: = 652131242424242361236123612361213124242456242424234434242461272424242451242132242132242424242436242424213077245621 <br /><br />As the string is getting longer, I just wrote a java program to compute all the values. <br /><br />ds: differences between consecutive inputs := -1 -3 -1 2 -2 1 2 -2 2 -2 2 -2 2 -2 1 3 -5 1 1 3 -5 1 1 3 -5 1 1 3 -5 1 -1 2 -2 1 2 -2 2 -2 2 1 1 -4 2 -2 2 -2 2 -2 1 1 0 -1 1 -2 2 -2 2 2 -5 1 5 -5 2 -2 2 -2 2 -2 2 1 -4 1 2 -2 -1 2 -1 0 2 -2 -1 2 -1 0 2 -2 2 -2 2 -2 2 -2 2 -1 3 -4 2 -2 2 -2 2 -2 -1 2 -3 7 0 -5 2 1 1 -4 -1 <br />ms: partial matches between consecutive inputs : = 5 2 1 1 1 1 2 2 2 2 2 2 2 2 2 3 1 1 2 3 1 1 2 3 1 1 2 3 1 1 1 1 1 1 2 2 2 2 2 4 5 2 2 2 2 2 2 2 2 3 4 3 3 2 2 2 2 4 1 1 2 2 2 2 2 2 2 2 2 4 1 1 2 2 1 1 2 2 2 2 1 1 2 2 2 2 2 2 2 2 2 2 2 3 3 2 2 2 2 2 2 2 1 1 0 0 7 2 2 4 5 2 1 <br /><br />In your opinion, is the input string "interesting"? If yes, I'll continue the computation for other variables. If not, I'll change a string. But keep in mind that this string is from a real image. 
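The ds/ms listings in the comment above can be reproduced with a few lines of Python (a quick check, treating each digit of the quoted string as one single-variable input):

```python
# Recompute ds and ms for the image-derived digit string quoted above.
s = "652131242424242361236123612361213124242456242424234434242461272424242451242132242132242424242436242424213077245621"
inputs = [int(c) for c in s]
ds = [b - a for a, b in zip(inputs, inputs[1:])]      # differences between consecutive inputs
ms = [min(a, b) for a, b in zip(inputs, inputs[1:])]  # partial matches (min of each pair)
print(ds[:8])   # [-1, -3, -1, 2, -2, 1, 2, -2], as listed
print(ms[:8])   # [5, 2, 1, 1, 1, 1, 2, 2], as listed
```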
Dan Hehttps://www.blogger.com/profile/07260547441514001463noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-17424599227095291262014-06-20T20:03:42.961-04:002014-06-20T20:03:42.961-04:00> By "A pattern that produced mam_hLe (a s...> By "A pattern that produced mam_hLe (a specific new variable) also contains Md", you mean the pattern also produces Md? <br /><br />Dan, a higher-level input is a lower-level pattern, that consists of multiple variables. See my definitions of dP, ddP, cdP, vP & so on. Their parameters are variables, & each is compared on the next level, generating new variables. So, higher level input contains a record of Md produced on a lower level, among all other parameters.<br />I know you have other things on your mind, but we've covered this several times already.<br /><br />> The words like "contains", "has" confused me, making me think Md is something like a sub-pattern.<br /><br />It's a variable. A pattern is a set of matching inputs, but its representation on a higher level is a single input containing several variables. That's compression.<br /><br />> And by "co-occurring Mds", you mean multiple Mds from the same input?<br /><br />No, a mean is an average of multiple inputs. You only have one variable of a given type per input. <br />Higher level patterns contain both Md & mam_hLe, among other variables.<br />So, aD is an average of Ds from hLe patterns containing average Md. That aMd is an average of Mds from hhLe patterns containing mam_hhLe. And so on, till you run out of levels.<br /> <br /><br />>"Higher-level inputs contain multiple variables, including those that represent match produces on lower levels." <br />> So "mean additive match on a higher level: mam_hLe" relies only on the inputs on the higher level directly, <br /><br />It's a sum of matches from all variables in an input: m_M + m_D + m_Ld + m_Md + m_Dd, & so on.<br /><br />> not the matches produced on lower levels, right? 
<br /><br />Right, but it includes match of variables representing matches produced on lower levels, see above.<br /><br />> Then what are the possible inputs on the higher levels besides the matches produced from lower levels?<br />> For the level 0, we have the raw inputs as pixels from the image, but what about for higher levels? <br /><br />Again, see my definitions of dP, ddP, cdP, vP...<br /><br />> "Any comparison beyond original inputs must be selective, otherwise you get combinatorial explosion. "<br />> I agree, but how many predictions you are conducting?<br /><br />It's evaluation, any input is already a prediction of adjacent inputs.<br />Cost-benefit analysis: value of evaluation = |net-negative value of avoided comparisons| - cost of evaluation.<br />This analysis itself can be indefinitely complex, but it must be significantly less so than operations being evaluated.<br /><br />The first selective comparison: between ds, actually has a non-selective partial comparison: AND(sign). <br />Matching sign defines dP, which is evaluated for multiple-d comparisons at once. So, you have one subtraction: (D-last_d) - aD, that determines multiple subtractions between ds. But this case is unusual because the rate of evaluation per comparison is not adjustable.<br /><br />> And you are using the prediction to prune further predictions?<br /><br />You prune inputs: feedforward. Evaluation criteria, such as aD, is an input-driven feedback. But there is a higher-order evaluation on a higher level, where you have a representation of multiple spans of inputs, defined by different value of evaluation criteria that pruned them.<br />You always have a single evaluation for multiple operations at once, otherwise it's not cost-effective.<br />So, that aD is also evaluated, but on a higher level & at a lower frequency than ds.<br /><br />But I am not sure how to determine composition of such evaluation criteria automatically.<br />Anyway, a good question. I owe you :). 
<br /><br />> If you want to avoid combinatorial explosion, you need to prune the search space. I didn't see this part clearly. <br /><br />Higher-level are value patterns: spans of lower-level outputs. So, the number of inputs is reduced. The number of variables per input increases, but not as much. Lower-level inputs are only compared within selected positive value patterns: that's a two-step pruning. See "annotation" for 2nd level.Boris Kazachenkohttps://www.blogger.com/profile/04025561850220554347noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-16169726084178723272014-06-20T13:40:27.305-04:002014-06-20T13:40:27.305-04:00This comment has been removed by the author.Dan Hehttps://www.blogger.com/profile/07260547441514001463noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-46062277949974370442014-06-20T13:40:16.732-04:002014-06-20T13:40:16.732-04:00By "A pattern that produced mam_hLe (a specif...By "A pattern that produced mam_hLe (a specific new variable) also contains Md", you mean the pattern also produces Md? The words like "contains", "has" confused me, making me think Md is something like a sub-pattern. And by "co-occurring Mds", you mean multiple Mds from the same input?<br /><br />"Higher-level inputs contain multiple variables, including those that represent match produces on lower levels." So "mean additive match on a higher level: mam_hLe" relies only on the inputs on the higher level directly, not the matches produced on lower levels, right? Then what are the possible inputs on the higher levels besides the matches produced from lower levels? For the level 0, we have the raw inputs as pixels from the image, but what about for higher levels? <br /><br /><br />"Any comparison beyond original inputs must be selective, otherwise you get combinatorial explosion. " I agree, but how many predictions you are conducting? And you are using the prediction to prune further predictions? 
If you want to avoid combinatorial explosion, you need to prune the search space. I didn't see this part clearly. <br /><br /><br />Dan Hehttps://www.blogger.com/profile/07260547441514001463noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-84805734371760352832014-06-17T20:57:07.501-04:002014-06-17T20:57:07.501-04:00Dan,
So, I now think that:
Md = (((D-last_d) * a...Dan, <br /><br />So, I now think that:<br />Md = (((D-last_d) * amd) + ((D-last_d) * M/I )) / 2 is too complex for the 1st level, &<br />m_cdP = (mag * am_cdP + cM * amm_cdP) / 2 is too complex for the 2nd level.<br /><br />Cruder predictors should suffice; I just posted an updated annotation table:<br /><br />…feedback to the 1st level that determines 2nd derivation:<br /><br />aD: average (D-last_d) in higher-level patterns that also contain mean value of Md<br />(total match is a subset of total comparands, so D-last_d is a crude predictor of Md)<br /><br />vD = (D-last_d) - aD: evaluation for 2nd derivation within dP <br />if vD is positive, consecutive ds within Q(d) are compared…<br /><br />…feedback to 2nd level for evaluating cdP inclusion into positive or negative vP is<br />acM: average cM in higher-level patterns that also contain mean value of m_cdP<br /><br />(basic assumption is that total lower-level match cM predicts higher-level match<br />m_cdP: combined match between same-type variables of consecutive cdP pair)<br /><br />cdP evaluation for inclusion into vP: v_cdP = cM - acM<br />consecutive cdPs with same v_cdP sign are included in corresponding<br />vPs: positive or negative value patterns of cdPs, defined by L_cdP & containing…<br /><br />> And how do we measure the accuracy of the prediction if we don't know Md?<br /><br />We do, from the cases where its projected value is positive & ds are compared. We don't know it for the cases where projection is negative, but we can extrapolate the ratio between projected Md & actual Md that we get from positive cases.<br />Boris Kazachenkohttps://www.blogger.com/profile/04025561850220554347noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-80968569666736391852014-06-17T09:49:31.674-04:002014-06-17T09:49:31.674-04:00Dan,
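The vD evaluation from the annotation table above can be sketched in Python. The aD value below is an assumed stand-in for higher-level feedback (the thread doesn't fix a number), and signs are handled naively, so this only illustrates the selection step:

```python
# vD = (D - last_d) - aD gates second derivation within a dP.
# aD is higher-level feedback; the value passed in is assumed, for illustration.
def eval_second_derivation(Q, aD):
    D, last_d = sum(Q), Q[-1]
    vD = (D - last_d) - aD
    if vD <= 0:
        return None                  # projected match too low: skip comparison
    dds = [b - a for a, b in zip(Q, Q[1:])]                # second differences
    mds = [min(abs(a), abs(b)) for a, b in zip(Q, Q[1:])]  # matches of ds: same-sign within a dP, so always positive
    return vD, dds, mds

# dP with Q(d) = [1, 1, 4, 1] (D = 7, last_d = 1), assumed aD = 2:
print(eval_second_derivation([1, 1, 4, 1], 2))   # (4, [0, 3, -3], [1, 1, 1])
```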
> mds: The matches of differences: -7, -...Dan, <br /><br />> mds: The matches of differences: -7, -7, -5, -5, -6, -6, 1, 1, 1, -3, -5, -5<br />> Mds: The sum of mds = -46<br /><br />mds are computed for same-sign differences only, & are always positive.<br />Different signs already tell you that the differences won't match; there's no point in comparing.<br /><br />> Md is value. What do you mean it's a specific type of match?<br /><br />It's a variable that represents total match of ds. M is a variable that represents total match of is.<br />And so on. These are different types of match, all represented & compared on higher levels.<br /><br />> What is "mean additive match on a higher level: mam_hLe, =<br />> (sum of all matches formed on that level, not carried on from lower levels)<br />> / (sum of all inputs to that level)" in this case?<br /><br />Higher-level inputs contain multiple variables, including those that represent match produced on lower levels. All these variables are compared, which gives you *additive* match.<br /><br />> "aMd = value of Md in a higher-level input that also has mam_hLe,",<br /><br />A pattern that produced mam_hLe (a specific new variable) also contains Md: another variable. So, these co-occurring Mds are summed on the next level to form, say, Md_hhLe, which is then divided by the number of patterns to form an average: aMd. <br /><br />> And you are saying hLe_Md & hLe_D are not computable in this example as they are in higher level?<br /><br />Correct.<br /><br />> Also you said there are two ways to predict Md: amd*D and rM*D. I would like to compute these two values and see how different they are. <br /><br />Your computation will be case-specific, you can't generalize from it. The difference should be deduced a priori.<br /><br />> Finally you mentioned this formula "pMd = (((D-last_d) * amd + ((D-last_d) * rM)) / 2" is a prediction of Md. <br />> But we are able to compute Mds directly right? 
Why we need to predict it?<br /><br />Any comparison beyond original inputs must be selective, otherwise you get combinatorial explosion. <br />But this prediction does seem too complex for Md, I'll try to simplify it. <br /><br />> And how do we measure the accuracy of the prediction if we don't know Md?<br /><br />Md doesn't come from outer space, it is related to all other variables by the derivation process.<br />I'll get back to you on that later.Boris Kazachenkohttps://www.blogger.com/profile/04025561850220554347noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-23118942748500794202014-06-16T22:34:50.409-04:002014-06-16T22:34:50.409-04:00Given ds: differences between consecutive inputs :...Given ds: differences between consecutive inputs : 8 -7 6 -5 4 -6 1 1 4 1 -3 -5 1<br /><br />mds: The matches of differences: -7, -7, -5, -5, -6, -6, 1, 1, 1, -3, -5, -5<br /><br />Mds: The sum of mds = -46<br /><br />So I am confused about your statement "Md is a specific type of match on the 1st level, ". Md is a value. What do you mean it's a specific type of match?<br /><br />What is "mean additive match on a higher level: mam_hLe, =<br />(sum of all matches formed on that level, not carried on from lower levels)<br />/ (sum of all inputs to that level)" in this case?<br /><br />Also for "aMd = value of Md in a higher-level input that also has mam_hLe,", what do you mean a value of Md (in this case, -46) that also has mam_hLe? How could a value also have another value? <br /><br />And you are saying hLe_Md & hLe_D are not computable in this example as they are in higher level?<br /><br />Also you said there are two ways to predict Md: amd*D and rM*D. I would like to compute these two values and see how different they are. <br /><br />Finally you mentioned this formula "pMd = (((D-last_d) * amd + ((D-last_d) * rM)) / 2" is a prediction of Md. But we are able to compute Mds directly right? 
And how do we measure the accuracy of the prediction if we don't know Md?<br />Dan Hehttps://www.blogger.com/profile/07260547441514001463noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-61636152464932141692014-06-14T23:12:04.164-04:002014-06-14T23:12:04.164-04:00Dan,
> For amd, what does "mean match per...Dan,<br /><br />> For amd, what does "mean match per magnitude of difference" mean?<br /><br />On a higher level, both Md & D are summed over multiple input spans into, say, hLe_Md & hLe_D. Then amd = hLe_Md / hLe_D.<br />As I said, magnitude of comparands crudely predicts their match because the latter is subset of the former. So, amd * D is one way to predict Md.<br />rM * D is another way to predict Md, assuming that rM is the same for I & D.<br />So, I combine these two ways to predict Md: pMd = (((D-last_d) * amd + ((D-last_d) * rM)) / 2<br /><br />These two ways are not equal, but I haven't figured exact proportion yet.<br /><br />pMd = (D-last_d) * amd * rM was wrong, I corrected it.<br /><br />> For aMd, what does "Md that co-occurs with mean additive match" mean?<br /><br />Md is a specific type of match on the 1st level, <br />mean additive match on a higher level: mam_hLe, =<br />(sum of all matches formed on that level, not carried on from lower levels)<br />/ (sum of all inputs to that level)<br /><br />aMd = value of Md in a higher-level input that also has mam_hLe,<br />so it's a predictor of mam_hLe. That is another principle of prediction: lower-level match predicts higher-level match. If it didn't, the inputs would be random, & there is no point in pattern discovery.<br /><br />So, I compare ds if their projected match, pMd, is at least high enough to predict mam_hLe. That's a more precise way of expressing "above average".Boris Kazachenkohttps://www.blogger.com/profile/04025561850220554347noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-60361782190786768132014-06-14T20:23:16.853-04:002014-06-14T20:23:16.853-04:00But I still don't know how to compute amd and ...But I still don't know how to compute amd and aMd.<br /><br />For amd, what does "mean match per magnitude of difference" mean? <br /><br />For aMd, what does "Md that co-occurs with mean additive match" mean? 
Md is a value, which is the sum of the matches between consecutive differences. So how could a value "co-occurs" with a match? And what does "mean additive match" mean?<br /><br />I know the following 4 variables are for the second level derivation, but why you make such definitions for them? <br /><br />amd: average | mean match per magnitude of difference: not sure about this<br />aMd: Md that co-occurs with mean additive match on a higher level: not sure about this<br />pMd: projected Md: (D-last_d) * amd * rM<br />pVd: value of projected match: pMd - aMd<br /><br /><br />Dan Hehttps://www.blogger.com/profile/07260547441514001463noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-41407257911276189932014-06-14T14:24:08.268-04:002014-06-14T14:24:08.268-04:00Dan,
M = sum of ms, m = min(a,b), which is a comp...Dan,<br /><br />M = sum of ms, m = min(a,b), which is a complementary of co-derived d.<br />So, min(1,2)+min(2,3)+min(3,7)+min(7,8) = 13<br />Sorry, my mistake in last reply, it's not 7<br /><br />> D = abs(-3 -5) = 8? <br /><br />D is not signed because negative sign is already indicated at -dL.Boris Kazachenkohttps://www.blogger.com/profile/04025561850220554347noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-41785120920717583862014-06-14T11:32:00.363-04:002014-06-14T11:32:00.363-04:00For two stretches:
12378: +dL = 4, D=7, M=7, Q(d):...For two stretches:<br />12378: +dL = 4, D=7, M=7, Q(d): 1 1 4 1; & <br />850:.-dL=2, D=8, M=5, Q(d):-3,-5;<br /><br />for stretch 12378<br />I could see D = 1 + 1 + 4 + 1, but how M is computed? <br /><br />for stretch 850<br />D and M are both positive? D = abs(-3 -5) = 8? <br />how M is computed? Dan Hehttps://www.blogger.com/profile/07260547441514001463noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-79643239393703647482014-06-13T17:15:04.518-04:002014-06-13T17:15:04.518-04:00Thanks Dan!
As I said, your string is not an &quo...Thanks Dan!<br /><br />As I said, your string is not an "interesting" input, so it won't justify deep comparison. L is a class, not a specific variable.<br />D & M are only computed within same-d-sign length dL, which in your example always=1, except for two stretches: <br />12378: +dL = 4, D=7, M=7, Q(d): 1 1 4 1; & <br />850: -dL=2, D=8, M=5, Q(d):-3,-5;<br /><br />Q(d) is recorded in case there is a need for second derivation: positive pVd. Which depends on pMd, which depends on feedback variables: aMd & amd. <br />The last two are derived from a wider range of past inputs, represented on a higher level. So, I can't compute them from your string, you need a wider context for that, but it looks like neither of these two "difference patterns" will justify second derivation.<br /><br />Also, my definition of pMd was wrong, it should be: <br />pMd = (((D-last_d) * amd) + ((D-last_d) * rM)) / 2.<br />That's an average computed from two different methods to project Md.<br />I'll post an update soon.<br /><br />Regards, <br />Boris<br />Boris Kazachenkohttps://www.blogger.com/profile/04025561850220554347noreply@blogger.comtag:blogger.com,1999:blog-6338728031095838255.post-14108942823501446572014-06-13T15:15:04.289-04:002014-06-13T15:15:04.289-04:00For input 1 9 2 8 3 7 1 2 3 7 8 5 0 1
is: initial...For input 1 9 2 8 3 7 1 2 3 7 8 5 0 1<br /><br />is: initial single-variable inputs, such as pixels 1 9 2 8 3 7 1 2 3 7 8 5 0 1<br />ds: differences between consecutive inputs : 8 -7 6 -5 4 -6 1 1 4 1 -3 -5 1<br />ms: partial matches between consecutive inputs: 1 2 2 3 3 1 1 2 3 7 5 0 0<br />D: sum of ds = 0 (Wow, by random chance?)<br />M: sum of ms = 30<br />Ld: same-d-sign length: not sure about this<br />L: same-sign length, initially same-d-sign Ld: not sure about this<br />Q(d): queue of differences: the same as ds? 8 -7 6 -5 4 -6 1 1 4 1 -3 -5 1<br />i: last input = 1<br />I: sum of inputs = 57<br />rM: ratio of M/I = 30/57 = 0.526<br /><br />amd: average | mean match per magnitude of difference: not sure about this<br />aMd: Md that co-occurs with mean additive match on a higher level: not sure about this<br />pMd: projected Md: (D-last_d) * amd * rM<br />pVd: value of projected match: pMd - aMd<br /> <br />What's the point to define the above 4 variables?Dan Hehttps://www.blogger.com/profile/07260547441514001463noreply@blogger.com
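The listed values can be recomputed directly with a quick Python check of the sums (note the ms listed sum to 30, giving rM = 30/57 ≈ 0.526):

```python
# Recompute the first-level variables for the example input above.
inputs = [1, 9, 2, 8, 3, 7, 1, 2, 3, 7, 8, 5, 0, 1]
ds = [b - a for a, b in zip(inputs, inputs[1:])]      # differences
ms = [min(a, b) for a, b in zip(inputs, inputs[1:])]  # partial matches
D, M, I = sum(ds), sum(ms), sum(inputs)   # sums of differences, matches, inputs
print(ds)            # [8, -7, 6, -5, 4, -6, 1, 1, 4, 1, -3, -5, 1]
print(ms)            # [1, 2, 2, 3, 3, 1, 1, 2, 3, 7, 5, 0, 0]
print(D, M, I)       # 0 30 57 -- D really does sum to 0, as noted
print(round(M / I, 3))   # 0.526  (rM)
```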