It looks like you're asking to build a deep feature extractor from a raw string of mixed data:

"stephen 52 yahoo com gmail com mail com 2020 21 txt"

A deep feature in machine learning or data processing typically means extracting meaningful, higher-level attributes from raw input: going beyond simple keyword extraction into inferred patterns, relationships, or embeddings.

Step 1 – Define the feature extractor

```python
import math

def extract_deep_features(raw):
    text = raw.strip()
    tokens = text.split()
    features = {}

    # 1. Basic stats
    features['token_count'] = len(tokens)
    features['char_count'] = len(text)
    features['digit_count'] = sum(c.isdigit() for c in text)
    features['alpha_count'] = sum(c.isalpha() for c in text)

    # 2. Name detection (if first token looks like a name)
    if tokens and tokens[0].isalpha() and tokens[0][0].isupper():
        features['has_name'] = True
        features['first_token_is_name'] = tokens[0]
    else:
        features['has_name'] = False

    # 3. Numbers
    numbers = [int(t) for t in tokens if t.isdigit()]
    features['numbers_found'] = numbers
    features['num_count'] = len(numbers)
    if numbers:
        features['num_sum'] = sum(numbers)
        features['num_avg'] = sum(numbers) / len(numbers)

    # 4. Email-related fragments
    email_domains = ['gmail', 'yahoo', 'mail', 'outlook', 'hotmail']
    found_domains = [d for d in email_domains if d in tokens]
    features['email_domains_mentioned'] = found_domains
    features['email_domain_count'] = len(found_domains)

    # 5. Possible email construction (name + domain)
    if features['has_name'] and found_domains:
        name = features['first_token_is_name']
        features['possible_emails'] = [f"{name}@{d}.com" for d in found_domains]
    else:
        features['possible_emails'] = []

    # 6. Year detection (1900-2030)
    years = [n for n in numbers if 1900 <= n <= 2030]
    features['years_found'] = years

    # 7. File extension detection (last token matching a common extension)
    if tokens and tokens[-1] in ('txt', 'csv', 'json', 'log'):
        features['file_extension'] = tokens[-1]
        features['looks_like_filename'] = True

    # 8. Pairwise patterns (bigrams)
    bigrams = [' '.join(tokens[i:i+2]) for i in range(len(tokens) - 1)]
    features['bigrams'] = bigrams

    # 9. Embedded feature: "year + number" combo
    if len(years) == 1 and len(numbers) > 1:
        other_nums = [n for n in numbers if n not in years]
        if other_nums:
            features['year_num_pair'] = (years[0], other_nums[0])

    # 10. Text entropy (as a measure of unpredictability)
    freq = {}
    for ch in text:
        freq[ch] = freq.get(ch, 0) + 1
    entropy = -sum((count / len(text)) * math.log2(count / len(text))
                   for count in freq.values())
    features['entropy'] = round(entropy, 3)

    return features
```

Step 2 – Run it on the raw string

```python
features = extract_deep_features("stephen 52 yahoo com gmail com mail com 2020 21 txt")
```

Step 3 – Output the deep features

```python
for k, v in features.items():
    print(f"{k}: {v}")
```

Output example:

```
token_count: 11
char_count: 51
digit_count: 8
alpha_count: 33
has_name: False
numbers_found: [52, 2020, 21]
num_count: 3
num_sum: 2093
num_avg: 697.666...
email_domains_mentioned: ['gmail', 'yahoo', 'mail']
email_domain_count: 3
possible_emails: []
years_found: [2020]
file_extension: txt
looks_like_filename: True
bigrams: ['stephen 52', '52 yahoo', 'yahoo com', 'com gmail', 'gmail com', 'com mail', 'mail com', 'com 2020', '2020 21', '21 txt']
year_num_pair: (2020, 52)
entropy: 3.933
```

For deep features in the embedding sense, you can encode the raw string with a sentence-embedding model:

```python
from sentence_transformers import SentenceTransformer

raw = "stephen 52 yahoo com gmail com mail com 2020 21 txt"
model = SentenceTransformer('all-MiniLM-L6-v2')
embedding = model.encode(raw)
features['sentence_embedding'] = embedding  # 384-dim vector
```

If by "make a deep feature" you meant something else (e.g., a neural net feature map, a regex to extract a password/username, or a data pipeline), let me know and I'll adjust.
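As a final note, the hand-crafted scalar features from the extractor can be flattened into a fixed-length numeric vector for a downstream model. This is a minimal sketch, not part of the original answer: the helper name `to_vector`, the particular key list, and the 0.0 default for missing keys are all my choices.

```python
def to_vector(features):
    """Flatten selected scalar features into a fixed-order numeric vector.

    Booleans become 0.0/1.0 via float(); keys absent from the dict
    default to 0.0, so every input string maps to the same-length vector.
    Note: the key list below is illustrative, not exhaustive.
    """
    keys = ['token_count', 'char_count', 'digit_count', 'alpha_count',
            'has_name', 'num_count', 'num_sum', 'num_avg',
            'email_domain_count', 'entropy']
    return [float(features.get(k, 0.0)) for k in keys]

# Works on any (possibly partial) features dict:
vec = to_vector({'token_count': 11, 'has_name': False, 'entropy': 3.933})
# vec[0] == 11.0, vec[4] == 0.0, and missing keys are filled with 0.0
```

Because the output length and ordering are fixed, vectors from different strings can be stacked directly into a matrix for clustering or classification.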