In a Linux environment, Python offers several ways to transfer data. Here are some common approaches:
Using the socket library for TCP/UDP communication

Server code:

import socket

# Create a TCP socket
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(('localhost', 12345))
server_socket.listen(5)
while True:
    client_socket, addr = server_socket.accept()
    data = client_socket.recv(1024)
    print("Received data:", data.decode())
    client_socket.sendall(data)
    client_socket.close()
Client code:
import socket
client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client_socket.connect(('localhost', 12345))
client_socket.sendall(b'Hello, Server!')
data = client_socket.recv(1024)
print("Received data:", data.decode())
client_socket.close()
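The examples above cover only TCP. A minimal UDP counterpart can be sketched in a single script, since UDP needs no connection handshake (the port 12346 here is an assumption; use any free port):

```python
import socket

# Receiver: bind a UDP socket to a local port (12346 is arbitrary)
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(('localhost', 12346))

# Sender: no connect() needed — each sendto() is an independent datagram
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b'Hello, UDP!', ('localhost', 12346))

# The datagram is queued on the bound socket, so recvfrom() returns it
data, addr = recv_sock.recvfrom(1024)
print("Received data:", data.decode())

recv_sock.close()
send_sock.close()
```

Keep in mind that UDP gives no delivery or ordering guarantees, so it suits cases where occasional loss is acceptable.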
Using the requests library for HTTP requests. First install the requests library:

pip install requests

Sending a GET request:
import requests
url = 'https://api.example.com/data'
response = requests.get(url)
print("Received data:", response.text)
Sending a POST request:
import requests
url = 'https://api.example.com/data'
data = {'key': 'value'}
response = requests.post(url, json=data)
print("Received data:", response.text)
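In practice it is worth adding a timeout and a status check to these requests. The sketch below spins up a throwaway local HTTP server so the snippet is self-contained (the /data path and the JSON payload are made up for the demo):

```python
import json
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

import requests

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a fixed JSON body for any GET request (demo only)
        body = json.dumps({'status': 'ok'}).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Port 0 lets the OS pick a free port
server = HTTPServer(('localhost', 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# timeout avoids hanging forever; raise_for_status() turns 4xx/5xx into exceptions
response = requests.get(f'http://localhost:{port}/data', timeout=5)
response.raise_for_status()
print("Received data:", response.json())

server.shutdown()
```

The same timeout and raise_for_status() pattern applies unchanged to the POST example above.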
Using the kafka-python library for Kafka message transfer. First install the kafka-python library:

pip install kafka-python

Producer code:
from kafka import KafkaProducer
producer = KafkaProducer(bootstrap_servers='localhost:9092')
producer.send('my_topic', key=b'my_key', value=b'my_value')
producer.flush()
Consumer code:
from kafka import KafkaConsumer
consumer = KafkaConsumer('my_topic', bootstrap_servers='localhost:9092', auto_offset_reset='earliest', group_id='my_group')
for msg in consumer:
    print("Received data:", msg.value.decode())
Using the pyarrow library for Parquet file transfer. First install the pyarrow library:

pip install pyarrow

Saving data to a Parquet file:
import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
data = {'column1': [1, 2, 3], 'column2': ['A', 'B', 'C']}
df = pd.DataFrame(data)
table = pa.Table.from_pandas(df)
pq.write_table(table, 'data.parquet')
Reading data back from a Parquet file:
import pandas as pd
import pyarrow.parquet as pq
table = pq.read_table('data.parquet')
df = table.to_pandas()
print("Received data:", df)
These approaches let you transfer data with Python in a Linux environment; pick the one that best matches your use case.