I let ChatGPT make ChatGPT

in chatgpt •  2 years ago 


Python is a powerful programming language that is easy to learn. It
has efficient high-level data structures and a simple but effective
approach to object-oriented programming. Its elegant syntax and
dynamic typing, together with its interpreted nature, make it an
ideal language for scripting and rapid application development in
many areas on most platforms. Python supports required and optional
arguments, keyword arguments, and even arbitrary argument lists.
Python source code and installers are available for download for all
versions. The Python Software Foundation promotes, protects, and
advances the Python programming language, and supports and
facilitates the growth of a diverse and international community of
Python programmers.
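As a quick illustration of that argument flexibility (my own example, not part of the generated program), here is how required, optional, keyword, and arbitrary arguments look in practice:

```python
def greet(name, greeting="Hello", *args, **kwargs):
    # name is required, greeting is optional with a default value,
    # *args collects any extra positional arguments,
    # **kwargs collects any extra keyword arguments
    extras = ", ".join(str(a) for a in args)
    options = ", ".join(f"{k}={v}" for k, v in kwargs.items())
    return f"{greeting}, {name}! [{extras}] [{options}]"

print(greet("Ada"))                            # Hello, Ada! [] []
print(greet("Ada", "Hi", 1, 2, lang="en"))     # Hi, Ada! [1, 2] [lang=en]
```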
This time I asked ChatGPT (the GPT-4 version) to write a natural
language processing program in Python using an RNN. If this succeeds,
it will be as if AI had a baby, so to speak.
GPT-4 is the fourth-generation language model in the GPT series,
created by OpenAI, and it promises significant advances in the field
of natural language processing (NLP). It has a larger model size
(reportedly more than one trillion parameters), better multilingual
capabilities, and improved contextual understanding and reasoning
abilities compared with its predecessor, GPT-3. However, it is
important to note that while GPT-4 handles multiple languages better
than its predecessors, it may still perform better in English than in
other languages because of factors such as data availability and
quality.
So, I decided to type in the prompt in English when creating the
program as well.
Without further ado, here is the resulting code. At first there were
some errors, but I corrected them with my own programming skills.

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.preprocessing.text import Tokenizer
from keras.utils import to_categorical

# Read the training text and tokenize it character by character
with open('sen6.txt', 'r', encoding='utf-8') as f:
    text = f.read()

tokenizer = Tokenizer(char_level=True)
tokenizer.fit_on_texts([text])
encoded_text = tokenizer.texts_to_sequences([text])[0]
vocab_size = len(tokenizer.word_index) + 1

# Build (input window, next character) training pairs with a sliding window
input_length = 10
data = []
labels = []
for i in range(len(encoded_text) - input_length):
    data.append(encoded_text[i:i+input_length])
    labels.append(encoded_text[i+input_length])

data = np.array(data)
data = to_categorical(data, num_classes=vocab_size)
labels = to_categorical(labels, num_classes=vocab_size)

# A single LSTM layer followed by a softmax over the vocabulary
model = Sequential()
model.add(LSTM(180, input_shape=(input_length, vocab_size)))
model.add(Dense(vocab_size, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')

# Train in manual batches until the loss drops below 0.07
batch_size = int(len(text) / 10)
num_batches = data.shape[0] // batch_size
loss = float('inf')
epoch = 0
while loss > 0.07:
    epoch += 1
    print(f'Epoch: {epoch}, Loss: {loss:.4f}')
    for i in range(num_batches):
        start = i * batch_size
        end = start + batch_size
        loss = model.train_on_batch(data[start:end], labels[start:end])

def generate_text(seed_text, num_chars):
    result = ""
    for _ in range(num_chars):
        encoded_seed = tokenizer.texts_to_sequences([seed_text])[0]
        encoded_seed = to_categorical(encoded_seed, num_classes=vocab_size)
        # Pad or trim the seed to exactly input_length characters
        if len(encoded_seed) < input_length:
            encoded_seed = np.pad(
                encoded_seed,
                ((input_length - len(encoded_seed), 0), (0, 0)),
                'constant')
        else:
            encoded_seed = encoded_seed[-input_length:]
        encoded_seed = encoded_seed[np.newaxis, ...]
        pred = model.predict(encoded_seed)
        pred_char_index = np.argmax(pred)
        # Look up the character for the predicted index
        for char, index in tokenizer.word_index.items():
            if index == pred_char_index:
                result += char
                seed_text += char
                seed_text = seed_text[1:]
                # Stop at the first newline in the generated text
                if '\n' in result:
                    return result[:result.index('\n')]
                break
    return result

model.save('my_model.h5')

# Simple interactive loop: type a prompt, get a generated reply
for k in range(100000):
    i = input(">> ")
    i = i + "**"
    generated_text = generate_text(i, 100)
    print(generated_text)

When I actually ran it, it learned the sentences (in whatever
language) inside the text file as training data, and I was able to
actually talk to the resulting model. The model is automatically
saved to a file. Accuracy, however, still needs to be improved. For
example, it outputs words with spellings that do not exist, and the
meaning of the sentences did not make sense at all. I guess it's not
as good as its parents.
It might be better to learn sentences word by word instead of letter
by letter. In any case, there is a lot of room for improvement, so I
will try to publish an improved version later.
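To illustrate what word-by-word learning would change: the model would pick whole words as units, so it could never emit a spelling that does not exist. In Keras this is just a matter of leaving char_level at its default in Tokenizer, but here is a minimal dependency-free sketch of the idea (my own illustration, not the improved version):

```python
def build_word_vocab(text):
    # Map each distinct word to an integer id, in order of first appearance
    vocab = {}
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab) + 1  # reserve 0 for unknown words
    return vocab

def encode_words(text, vocab):
    # Replace each word with its id; unknown words get 0
    return [vocab.get(word, 0) for word in text.lower().split()]

text = "the cat sat on the mat"
vocab = build_word_vocab(text)
print(vocab)                           # {'the': 1, 'cat': 2, 'sat': 3, 'on': 4, 'mat': 5}
print(encode_words("the mat", vocab))  # [1, 5]
```

A model trained on these ids would predict the next word id instead of the next character, trading a larger output layer for guaranteed real words.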
