
Let's talk about AI

2023-09-01 by: Chad Smith
From: Chad Smith 
------------------------------------------------------
I haven't heard from this list in almost half a year.  Are y'all dead?
Replaced by bots?  Speaking of bots, I love ChatGPT.  What are your
thoughts?

*- Chad W. Smith*

===============================================================
From: flushy@flushy.net
------------------------------------------------------
I'll paste what I wrote in a work forum about AI. This was in response
to a "Bill of Materials" being proposed for AI.

https://redmonk.com/jgovernor/2023/10/18/introducing-the-ai-bill-of-materials/

I’m glad it’s being discussed. But discussion needs to be followed by a
plan, and that plan by an action.

A plan would be a framework and specification that folks would
implement. I think that a BOM is a start, but it’s only part of the
solution:

* It helps to answer input-output validation.
* It does not answer the data protection issue.

Data protection is not just copyright infringement or trade secrets,
but personally identifiable information and protected information. The
framework would also describe the validation points and the interfaces
to validate your information or revoke its usage.

Action would be incentive or motivation to implement the framework. In
China’s case it’s monetary: do this or don’t do business with us. Open
source might have different motivations, but whether by licensing,
threat of legal action, or newly invented compliance regulations and
laws - we need something with teeth.

Asimov may have invented the laws of robotics, but now we need the laws
of AI.

--b
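To make the framework idea a bit more concrete, here is a rough Python
sketch of what one entry in such a BOM might record. The field names
(license, PII flag, consent basis, revocation endpoint) are illustrative
guesses based on the points above, not anything taken from the RedMonk
proposal:

from dataclasses import dataclass, field
from typing import List

@dataclass
class DataSource:
    """One training/input data source listed in an AI BOM (illustrative)."""
    name: str
    uri: str
    license: str              # e.g. "CC-BY-4.0", "proprietary", "unknown"
    contains_pii: bool        # personally identifiable information present?
    consent_basis: str        # how usage of this data was authorized
    revocation_endpoint: str  # where a data owner could revoke usage

@dataclass
class AIBillOfMaterials:
    """Per-model manifest: what went in, and how outputs get validated."""
    model_name: str
    model_version: str
    training_sources: List[DataSource] = field(default_factory=list)
    validation_points: List[str] = field(default_factory=list)  # checks run on inputs/outputs

    def unlicensed_sources(self) -> List[DataSource]:
        # A validator with "teeth" would refuse to ship a model whose
        # BOM still contains entries like these.
        return [s for s in self.training_sources if s.license == "unknown"]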

===============================================================
From: Dan Lyke
------------------------------------------------------
I've been playing with Bard, and with GPT-3.5 Turbo and GPT-4, and I
think "what's the source of the training data?" is a fantastic
question, but...

I have heard people say that CoPilot is a fantastic tool. I also know
that it seems to produce a lot of code that looks remarkably like code
that's licensed in various ways that... the people using CoPilot may
not want to license their code under. I am willing to use it while
doing work-for-hire if my employer is willing to assume all liability
for potentially incorporating code with unclear licensing.

But the big thing I've been seeing is that whatever it is that GPT is
creating for me is... uh... people tell me it's useful, but they also
spent the better part of a week trying to find language that got useful
examples out of it. I have gotten a whole lot of URLs that didn't point
to what they said they did, or pointed to other things, a bunch of
made-up paper citations, and text which reads like a really helpful 7th
grader with no filter.

So... I realize that a lot of people are saying they find value in
these things, and I've got coworkers who are excited about it, but I am
*super* skeptical.

===============================================================
From: Dan Lyke
------------------------------------------------------
I do wonder about LLMs, or what we've learned from parsing through
them, as a mechanism for bringing back Interactive Fiction.

I know that many of the LLM mechanisms will give a large (circa 1,500
dimensions) vector that somehow encodes a notion of semantic meaning,
and that Google has been using this in searches. I submit that a lot of
the decline in Google search quality comes from this. Often I'm
searching for specific language, and fuzzy notions of "meaning", for
values of "meaning" which lose a lot of context, are pretty crappy.

For instance: I have searched for specific details of
MacOS/AppKit/Cocoa API stuff, using specific terms from that API, and
gotten stuff with similar meaning in Windows and Qt APIs. Which is
pretty worthless.

So, in conclusion: I'm not seeing it.
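For anyone who hasn't poked at those vectors: the circa-1,500-dimension
figure lines up with common embedding models (OpenAI's
text-embedding-ada-002 uses 1536), and retrieval over them is basically
cosine similarity. A minimal sketch, with embed() as a stand-in for
whatever model actually produces the vectors, which also shows why a
Cocoa query can happily surface a "semantically similar" Qt page:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Embedding search ranks documents by vector angle, not exact wording.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_documents(query_vec: np.ndarray, doc_vecs: list) -> list:
    # Returns document indices, most "semantically similar" first. A doc
    # about a different platform's API can easily outrank one containing
    # the exact identifier you typed.
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=scores.__getitem__, reverse=True)

# embed() is hypothetical, standing in for the embedding model:
#   vecs = [embed(t) for t in corpus]
#   rank_documents(embed("NSWindow setFrame:display:"), vecs)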

===============================================================
From: Billy
------------------------------------------------------
> it seems to produce a lot of code that looks remarkably like code
> that's ...

That’s been my experience with most AI. At least with a specialized
LLM, it (a) has a lot of context and focus to produce meaningful
results, and (b) the folks asking know enough to spot the problems.

I’ve been messing around with our Watson code generation, and it’s not
bad, but it’s not perfect. It’ll produce an Ansible playbook that looks
right, but sometimes the passed data structures aren’t correct, or the
module name isn’t fully qualified. I guess paired with a smart IDE, it
would speed up boilerplate code. Though, I question the value in that,
as there are better ways to offload the mental effort there, too - like
reusability.

> ... license their code under.

And that’s the context I think LLMs miss. It’s the output validation
issue. If I ask for a set of X, how can I be sure you’re giving me an X
and not a Y? If I want GPL code, then I should be confident that you’re
only giving me GPL-compatible code. That confidence would come from
validation, and that validation from a tracing of inputs.

For more complex queries, this is crucial: if I don’t have the
foundational knowledge to know if the answer is right, then I at least
need to be able to vet the sources.

> ... things, and I've got coworkers who are excited about it, but I am
> *super* skeptical.

Same. Heck, lots of companies are on the AI bandwagon. Mine included.

I also think folks are conflating the term AI, when they really mean
ML, or just analytics.

Specifically for LLMs, I’m having a hard time justifying the value when
there is no accountability for the responses.

Additionally, it scares the bejesus out of me that the thing an LLM
needs is more data, and there’s no standard or process to enforce data
quality, provenance, and authorization to even have it.

--b
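The unqualified-module-name problem, at least, is mechanically
checkable. A rough sanity check one could run over a generated playbook
might look like the sketch below; the task-keyword list is deliberately
incomplete, just enough to illustrate the idea:

import sys
import yaml  # PyYAML

# Common task keywords that are not module names (incomplete list).
TASK_KEYWORDS = {
    "name", "when", "register", "vars", "loop", "with_items", "become",
    "become_user", "tags", "delegate_to", "ignore_errors", "changed_when",
    "failed_when", "environment", "notify", "block", "rescue", "always",
}

def unqualified_modules(playbook_path):
    """Flag task modules that aren't fully qualified (namespace.collection.module)."""
    with open(playbook_path) as fh:
        plays = yaml.safe_load(fh) or []
    findings = []
    for play in plays:
        for task in play.get("tasks", []) or []:
            for key in task:
                if key in TASK_KEYWORDS:
                    continue
                if "." not in key:  # e.g. "copy" instead of "ansible.builtin.copy"
                    findings.append((task.get("name", "<unnamed>"), key))
    return findings

if __name__ == "__main__":
    for task_name, module in unqualified_modules(sys.argv[1]):
        print(f"task '{task_name}' uses short module name '{module}'")

That only catches the form of the output, of course; knowing whether
the generated code itself is GPL-compatible is the input-tracing
problem described above, and no syntax check answers that.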

===============================================================
From: Billy
------------------------------------------------------
> ... API stuff, using specific terms from that API, and

Omg. So much this!!

So for my eatmemory project I posted earlier, I was looking for ways to
programmatically determine memory availability and limitations while
running on Darwin. I spent so much time searching and getting either
irrelevant results, or Apple documentation results w/ little context,
no examples, and opaque data structures, that I eventually just took a
command from a Stack Overflow answer and am reading it via a pipe.
Dirty, I know, but I spent too much time trying to find the right way…

Maybe if you have the time and can decipher some of those API docs, I’d
appreciate it.

--b
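For reference, the read-a-command-through-a-pipe approach looks roughly
like this in Python. vm_stat here is just a stand-in example, since the
exact command grabbed from Stack Overflow isn't given:

import re
import subprocess

def free_bytes_darwin() -> int:
    """Rough free-memory estimate on macOS by parsing `vm_stat` output."""
    out = subprocess.run(["vm_stat"], capture_output=True, text=True,
                         check=True).stdout
    # First line reports the page size; "Pages free" gives the free count.
    page_size = int(re.search(r"page size of (\d+) bytes", out).group(1))
    free_pages = int(re.search(r"Pages free:\s+(\d+)\.", out).group(1))
    return free_pages * page_size

if __name__ == "__main__":
    print(f"{free_bytes_darwin() / (1024 ** 2):.1f} MiB free")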
