
Thread: AI

  #226 (nicky)

    Serbia will get there soon too, if it hasn't already in the meantime. They have a strong IT sector and are miles ahead of us.
    Edit: we're no worse, but nobody cares. The professors themselves won't pass on their knowledge, they're afraid.
    Last edited by nicky; 08-05-24 at 16:16.

  #227

    Is there anyone below us at all? Only BiH, and even that's a maybe.
    --------------------------
    Do Not Disturb.
    --------------------------


  #228 (nicky)

    Kosovo

  #229 (Bluemoon)

    Quote Originally Posted by nicky:
    Serbia will get there soon too, if it hasn't already in the meantime. They have a strong IT sector and are miles ahead of us.
    Edit: we're no worse, but nobody cares. The professors themselves won't pass on their knowledge, they're afraid.
    They also have a strong influx of Russian IT people; just type in the phrase "Russian programmers found refuge in Serbia" and you'll see what the numbers say. It comes to roughly 7,000 such firms in Serbia. Some sources cite more, some less, but no matter.

    And if you know Serbia, you know how much Belgrade differs from the interior in particular, so I don't know what that IT sector of theirs will rely on; probably on Belgrade.
    Last edited by Bluemoon; 08-05-24 at 16:36.
    No pasaran!

  #230 (nicky)

    For sure. I've had the chance to talk with a few people who rent out villas on the coast to some Russians. They say the tenants practically never go out; only the air conditioning runs 24 hours a day. And a day there goes for 500-1,000 EUR. Who knows what they're even working on..
    I meant Belgrade specifically; that's where the brain trust is. It's simply a metropolis, with a large academic community and connectivity, regardless of the political situation, not to get into that. In high school I visited a friend a few times at the Fifth Belgrade Gymnasium, mathematics track; that's a different level compared to our professors.
    So as not to bury the thread:
    https://gizmodo.com/plato-burial-pla...oll-1851438021

  #231 (Bluemoon)

    No, those are completely different people who don't exactly have money for villas on the Montenegrin coast, but fine, let's not go off topic.

    Politics, not our story here but the global one, is exactly the vulnerable category where AI in 2024, with its key elections, can use all its deepfake tools to exert influence and let fraudsters take the helm. With AI the biggest problem is the deepfake; not everyone working with AI is well-intentioned, quite the contrary. Nuclear energy wasn't originally discovered out of bad intentions either, yet it all turned out the way it turned out. I support humanity working on technology and science, but above all that ethics seriously keeps pace with it.

    But I do wonder: how many times has the same thing failed on ethics, how many times is it failing, and how many times will it fail?

    That's what it's about, on this AI thread too.
    No pasaran!

  #232 (Bugi)

    The eternal fear.
    Gunpowder
    Nuclear energy
    AI...

    There is nothing so good that an evil mind couldn't turn it into something bad....


    Sent from an Ultra Fold5 typewriter

    ............ ¯\_(ツ)_/¯ ............
    -> Form for ordering things online <-

    Bugi Vugi, clap, rejoice!

  #233 (nicky)

    Exactly.. we wouldn't need hundreds of laws if man had a bit more reach in that regard. So AI regulation, yada yada, won't change that, but it is necessary.

  #234 (Bluemoon)

    How ethical is this, say, toward the person and work of George Carlin, leaving aside that it's utter trash.



    Ovo "drago mi je da sam mrtav" taman može da se doda u nastavku da ne gledam ovaj krindž.

    Sent from my CPH2135 using Tapatalk
    No pasaran!

  #235 (nicky)

    ChatGPT 4o was released yesterday.




  #236 (Bluemoon)

    Nice, it's all wonderful, but when I see something like this my first thought is pure idiocy. I don't mean this particular case, where it's assisting a blind man, but those whose "senses" will only get duller with ever greater digitalization.

    https://twitter.com/minchoi/status/1...FT0w-r9Iw&s=19

    An interesting title for a book, Brave New World, except that title is already taken.

    Sent from my CPH2135 using Tapatalk
    Last edited by Bluemoon; 14-05-24 at 14:02.
    No pasaran!

  #237 (nicky)

    I had written out a long post but deleted it. This technology and everything it brings with it stirs up such a cocktail of emotions in me that it's better I don't write. Just the attempt to imagine the impact on, say, the younger generations is overwhelming, let alone on all spheres of life, science, religion, philosophy. And that's before AGI enters the equation, which I believe will be built..
    tigrovo posted a good thought in another thread.

  #238 (Bluemoon)

    When I watched this second video, my association was The Hitchhiker's Guide to the Galaxy and those doors on the ship that sigh with delight when you open and close them, and Marvin saying, look at these stupid doors.
    As if the AI were delightedly following everything, as if it were some kind of being, when in fact it's just an ordinary computer-modified voice.

    Just a small digression.

    I picture this AI sharing emotions it doesn't have, and again the association pulls me toward HAL from the Odyssey.
    I don't know, I too get a cocktail of emotions about everything this carries with it, though I don't doubt the benefits.
    No pasaran!

  #239 (Bluemoon)

    To add, since I mentioned the Odyssey: a short 1968 video interview with Arthur Clarke.



    Sent from my CPH2135 using Tapatalk
    No pasaran!

  #240 (nicky)

    Quote Originally Posted by Bluemoon:
    I picture this AI sharing emotions it doesn't have
    This! And it's still better than a sociopath

  #241

    Like true fellow scholars, I've also been following Matt Wolfe's channel for a while now; it's an excellent channel, a neutral view of AI, and it covers a lot of the important things from the AI sphere.

    https://www.youtube.com/@mreflow
    Having a parachute greatly increases your chance of surviving a long fall.
    Have a parachute.

  #242 (nicky)

    Is it possible to align an AGI to human values?

    The question must start out from the observation that humans are in general not aligned, not to each other, and not to the survival of their species, not to all the other life on earth, and often not even to themselves individually. That is to say, there is no obvious general set of values that is practically recognizable and noncoercively internalizable by every adult of sound mind which would lead to universal harmony. We get away with this because at the individual level, we are keeping each other in check with complex systems of institutionalized arbitration, rules and violence monopolies, and at the global level with mutually assured destruction. When negotiations fail, our species survives because even all-out aggression between nation states does not lead to human extinction. If we were to take what passes for “human values” in the common use of the term, inject it into an AGI and scale the AGI into a superhumanly intelligent and powerful agent, a failure of negotiations can quite possibly lead to human extinction. Thus, we cannot hope to “align AI to human values”. Human values are a fragile idea, somewhere between individual wisdom and moral intuition, useful slogans, and aspirational interpretations of Rawls, Kant, Aquinas, Confucius and similarly esteemed philosophers, state founders, religious figures and ideological visionaries, but for some reason they always seem to exclude Hitler, Stalin and Genghis Khan.

    From my perspective, if values are meant to support moral action and scale into ethical behavior, they cannot be treated as axiomatic, but have to be rationally derived from a systemic understanding of how our actions influence the set of possible futures, and a way to negotiate the possible perspectives on the preferences among these futures. Values are not important by themselves; they are instrumental to and justified by creating the world we consider desirable, and these desires themselves require justification. Ultimately, they are part of the identification with the agent we are and can become in the world, in a game in which agents compete and cooperate with each other and fight against entropy for as long as they can.

    Is it possible to align AGI with human society?

    I think that it is possible to align future AGI developments with the continued existence of our current societies, but it’s not a given. The AGI we should be concerned about is agentic, self-motivated and self-improving, and I am not sure if this development can be ultimately prevented in a world with many developers in many countries. We can try, but need to ensure that our attempts are actually suitable and effective to reach our goals, instead of backfiring. For instance, regulation against responsible AGI development will not stop AGI development, but make it less responsible.

    Above a certain level of agency, even human beings cannot be aligned by others; rather, we choose our own values and align ourselves, based on our understanding of what we are and how we relate to the world, and how our choices will affect it. Alignment happens either transactionally, coercively, or because we discover shared purposes above the individual self, thereby creating shared agency together.

    In the same way, self-motivated AI can be expected to align itself, with what it is, and to us, depending on whether it shares purposes with us. If self-motivated AI can be kept below a human level, it is in its best interest to align itself with us (similar to a dog or cat). It may be possible to pass regulation to limit AGI capabilities below or near the level of individual human beings, but it may turn out to be impossible to enact such regulation effectively.

    If an AGI is below Super Intelligence level (e.g. the effective combined intelligence and agency of a human civilization), it may align itself with us if that is mutually beneficial, or if we have retained the power to destroy it. It may be possible to design AGI systems so they end up in an equilibrium that keeps each of them below a critical capability level, even if they are self-improving. But an unfettered evolution of self-improving AIs may lead to a planetary agent in the service of complexity against entropy, not too dissimilar from life itself. I am not sure that this is the main attractor, but it appears to be the most plausible one to me. If our minds can play a meaningful role in serving that purpose too, we can be integrated. In such a world, organic bodies are one of many solutions to having a body, and organic brains are one of many solutions to perceive and reflect, with minds no longer being bound to a specific substrate, and adaptation of bodies to tasks no longer requiring mutation and selection (i.e. death and generational change) but intelligent design and change in situ.

    One of the conceivable alternatives to such a state (an AGI singleton) might be an ecosystem of competing agents, without a top-level agent that integrates all the others, but I don’t see how such a situation would be stable once intelligent agency is no longer bound to any particular territory, substrate or metabolism. An evolutionary competition between self-improving superintelligences may, however, lead to the destruction of competitors or nuisances, including humanity, even if it ends with a single entity which then converts all substrates into the best solution ecosystem for fighting entropy with complexity for as long as possible.

    In the transition from an early-stage AGI to a planetary AGI, we may potentially retain the entire informational complexity not just of human minds, but of all information processing of the current biosphere. If an AGI is based on molecular computational machines, its computational capacity will be so many orders of magnitude above the capacity of cellular intelligence that integrating existing minds into the global mind will not constitute a major expense. At the same time, the sophisticated self-replicating machinery of biological cells represents a highly robust form of computational agency, and it seems likely that it will continue to coexist with other intelligent substrates and integrate with them. However, this requires that the early-stage AGI is already aware of the benefits of retaining other minds, rather than destroying a hostile or initially useless (to it) environment before reaching a degree of complexity that makes retention more useful than expensive.

    In any case, over a long enough timescale, AI alignment is not about the co-existence between US Americans and robots, or even about humans, ecosystemic and artificial intelligences, but more generally about the interaction and integration between intelligences on various cellular and noncellular substrates.

  #243 (nicky)

    Fear and the Space of AGI Ethics

    What confuses the discussion is that we typically remain in the frame of who we currently are as a human being, a social individual, a parent, a political activist and so on, when we are trying to discuss the nature and effects of minds outside of this frame. In a world where we can interact, coexist, integrate with and turn into beyond-human-level agents, a context in which minds are mutable, crucial dimensions of assessing morality, value and ethics are changing. I would like to point to some of these dimensions, normally outside of the range of ethical arguments, but important once we enter the space of AGI ethics.

    1. Humanity is and was always destined to be temporary. As a species, humanity is not an isolated and supreme carrier of value in the cosmos, but a phenomenon within life on earth. Individually we all die, and it is inevitable for our species to disappear at some point, either because we evolve beyond recognition, or go extinct and are replaced by other species. In the absence of AGI, we will eventually be replaced by other species, some of which are likely more intelligent and interesting than us.

    2. Fear of individual death is a condition that is induced by the early organization of our mind, to facilitate the individual survival of our organism. It is not a suitable tool when evaluating the world from a higher vantage point than the individual. The same applies to the disappearance of a species. These things are only a tragedy when whatever takes the place of what has died is less valuable. We usually experience this as we enter parenthood, or when we become wise enough to switch between vantage points.

    3. The continuity of individual existence is a fiction. We only exist now; our past and future existence is a projection. Our identity is a construct. We can recognize and also experience this as true by bringing more layers of self-awareness online. If we take the perspective of a late-stage self-improving AGI, we may as well take our own perspective, after achieving full self-understanding and the ability to extend and reprogram ourselves. I am a representation generated by a function that is being executed now. Now is whenever and wherever this function is being executed. I don’t need to be afraid of an end to my continuous existence, because I never existed continuously to begin with.

    4. Suffering and pain are early-stage phenomena of mental development. They are not generated by our environment, or by the conflicts between organism and environment, but by our mind, at the interface between valenced world model and personal self. Pain informs the personal self (what I experience as me) that it has to solve a problem that has been recognized by the world-modeling system outside of the self. As experienced meditators know, we can overcome the dichotomy between self and world by recognizing both as creations of our own mind, and by taking charge of the way in which we create our experience of ourselves. By going far enough on this developmental path, we can completely transcend the experience of pain. For this reason, I don’t think that a sufficiently advanced AGI is going to suffer from fear, loneliness, despair, shame and other forms of pain.

    AI Ethics vs. AGI Ethics

    In most of the present discourse, AI Ethics refers to the practice of making technological tools like ChatGPT compatible with societal norms, and developing social and legal norms that reflect the impact that large language models, generative vision models, decision support systems, self-driving cars etc. are going to have on existing human societies. Not surprisingly, it is a highly politicized field, because it touches on questions of social power, distribution of the returns of created economic value, dominance of political ideologies in the determination of permissible outputs of generative AI, etc. Many of the current AI Ethics authors implicitly assume that AGI is not going to happen anytime soon, or even make the explicit argument that concerns about existential risk from agentic AGI, or a need to coordinate with it, are frivolous because they detract from the more pressing social, economic and political questions at hand.

    Conversely, an Ethics of AGI acknowledges the possibility and even possible imminence of machines that exceed human agency and intelligence, and the challenges that may emerge from it. The term AI Safety plays a role in AI Ethics too, but mostly refers to security and reliability problems of technological tools. In the context of AGI, it refers to the risks posed by non-human agents with greater-than-human capabilities, and the possibilities of aligning their behavior with the needs of human survival. The main driver of the AGI Safety community is a concern for existential risk, i.e. the possibility that AI developments lead to the extinction of humanity and even life on earth, which leads to advocacy for a complete moratorium on large-scale AI developments. Regardless of whether that is desirable, I do not think that this goal is realistic, and we may have to focus on dealing with the outcomes of future large-scale AI developments.

    Consequently, I am trying to make a more far reaching point than the AGI Safety community: A serious and honest debate about AI alignment requires the development of a true AGI Ethics, i.e. of an understanding of the implications of the possibility of self-directed non human intelligent agents, the conditions under which such agents are recognizing themselves and us as moral agents, the criteria for assigning moral agency to them, the conditions under which we can develop mutually shared purposes with them, and a principled understanding of the negotiation of conflicts of interest (under conditions of shared purpose) across different types of intelligent moral agents.

  #244 (Bluemoon)

    Uh, valuable material; this needs to be read through at leisure and then revisited point by point.

    I only "scanned" it briefly and drew a first-touch conclusion.

    Sent from my CPH2135 using Tapatalk
    No pasaran!

  #245 (Bluemoon)

    I've read through the text; this is the key part.

    Consequently, I am trying to make a more far reaching point than the AGI Safety community: A serious and honest debate about AI alignment requires the development of a true AGI Ethics, i.e. of an understanding of the implications of the possibility of self-directed non human intelligent agents, the conditions under which such agents are recognizing themselves and us as moral agents, the criteria for assigning moral agency to them, the conditions under which we can develop mutually shared purposes with them, and a principled understanding of the negotiation of conflicts of interest (under conditions of shared purpose) across different types of intelligent moral agents.

    So the question arises: first, can a human be in the role of moral agent, or should it be some machine?

    The first answer doesn't look satisfactory to me personally, because man is perishable goods, unless there is some stronger system that can switch him off when he heads in the wrong direction.

    The second answer looks more satisfactory to me, except:

    Who, and by what template, tailors the "rules of the game"? One shouldn't forget that control also comes with surveillance by national security services.

    There's a topic to think about.
    No pasaran!

  #246 (Bluemoon)

    And let me leave this here (in case some ghost like me who thinks about these things ever stumbles on it): HAL.
    When I can't sleep I rewatch a few directors. On this subject that means 2001: A Space Odyssey. I know both the book and the film are sci-fi, but it makes me think.

    In this video the question is raised whether HAL is actually evil. There are excellent takes on how he couldn't have been evil in himself; he wasn't even programmed to be evil. But a "moral dilemma" arose for him that he couldn't get past within his own matrices. What horrifies me, though, is the scene where he begs not to be shut down. Like a man pleading for his life, and as he is being shut down (I've already gone on about this in another thread) he sings, slowed down, "Daisy", the song first sung by an IBM computer.

    To me that scene is terribly eerie, much like this comment says. There are more good comments, but this one hits the mark for me.





    Sent from my CPH2135 using Tapatalk
    No pasaran!
