Hello, readers!
In this article, a lead consultant from Neoflex's Big Data Solutions business area explains in detail the options for building data marts with a variable structure using Apache Spark.
As part of a data analytics project, the task of building data marts on top of loosely structured data comes up frequently.
Usually these are logs, or responses from various systems, stored as JSON or XML. The data is uploaded to Hadoop, and a data mart then has to be built from it. Access to the resulting mart can be provided, for example, through Impala.
In this case, the layout of the target mart is not known in advance. Moreover, the schema cannot be drawn up beforehand either, since it depends on the data, and we are dealing with this very weakly structured data.
For example, today the following response is logged:
{source: "app1", error_code: ""}
and tomorrow the same system returns this response:
{source: "app1", error_code: "error", description: "Network error"}
As a result, one more field, description, should be added to the mart, and nobody knows whether it will actually arrive.
The task of building a mart on such data is fairly standard, and Spark has a number of tools for it. Both JSON and XML are supported for parsing the source data, and for schemas that are not known in advance there is schemaEvolution support.
At first glance, the solution looks simple. You take a folder with the JSON and read it into a dataframe. Spark infers a schema and turns the nested data into structures. Everything then has to be saved as Parquet, which Impala also supports, by registering the mart in the Hive metastore.
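Indeed, the whole recipe seems to fit in a couple of lines; a minimal sketch of this naive approach (all paths and the table name are placeholders):

# read raw JSON and let Spark infer the schema
df = spark.read.json("/data/raw/source1/")
# store as Parquet and register the table in the Hive metastore
df.write.format("parquet").option("path", "/marts/source1").saveAsTable("source1")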
Everything seems simple.
However, it is not clear from the short examples in the documentation what to do about a number of problems that show up in practice.
The documentation describes an approach for reading JSON or XML into a dataframe, not for building a data mart.
That is, it simply shows how to read and parse JSON:
df = spark.read.json(path...)
That is enough to make the data available in Spark.
In practice, the scenario is far more complex than simply reading JSON files from a folder and creating a dataframe. The situation looks like this: there is already a certain data mart, new data arrives every day, it has to be added to the mart, and we must not forget that the schema may differ.
The usual scheme for building a mart is as follows:
Step 1. The data is loaded into Hadoop, followed by daily incremental loads, and added to a new partition. The result is a folder with the source data partitioned by day.
Step 2. During the initial load, this folder is read and parsed by Spark. The resulting dataframe is saved in a format suitable for analytics (e.g. Parquet), which can then be imported into Impala. This produces a target mart with all the data accumulated up to that point.
Step 3. A load is created that updates the mart every day. (A rough skeleton of this daily job is sketched below.)
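Under placeholder paths and a placeholder date, such a daily job might be skeletonized as follows (step 1, the upload to Hadoop itself, happens outside Spark):

date_load = "2019-12-12"  # hypothetical partition being processed

# step 2: parse today's raw data and write it into the mart as a Parquet partition
df = spark.read.json("/data/raw/source1/date_load=" + date_load + "/")
df.write.mode("overwrite").parquet("/marts/source1/date_load=" + date_load + "/")

# step 3: refresh the table registration in Hive so that Impala sees the new data;
# what exactly this "refresh" involves is the subject of the rest of the article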
This raises the questions of incremental loading, the need to partition the mart, and supporting the overall schema of the mart.
Let's take an example. Suppose the first step, building the repository, has been implemented, and the upload of JSON files to a folder has been set up.
Creating a dataframe from them and saving it as a mart is not a problem. This is the very first step, easy to find in the Spark documentation:
df = spark.read.option("mergeSchema", True).json(".../*")
df.printSchema()
root
 |-- a: long (nullable = true)
 |-- b: string (nullable = true)
 |-- c: struct (nullable = true)
 |    |-- d: long (nullable = true)
Everything seems to be going well.
We read and parse the JSON, save the dataframe as Parquet, and register it in Hive in any convenient way:
df.write.format("parquet").option('path','<External Table Path>').saveAsTable('<Table Name>')
We get a data mart.
But the next day new data arrives from the source. We have a folder with the JSON, and a mart built on top of that folder. After the next portion of data is loaded from the source, the mart is one day short of data.
The logical solution is to partition the mart by day, which allows a new partition to be added each following day. This mechanism is also well known: Spark lets you write partitions separately.
First we perform the initial load, saving the data as described above and adding only the partitioning. This step is called the mart initialization and is performed only once:
df.write.partitionBy("date_load").mode("overwrite").parquet(dbpath + "/" + db + "/" + destTable)
The next day we load only the new partition:
df.coalesce(1).write.mode("overwrite").parquet(dbpath + "/" + db + "/" + destTable +"/date_load=" + date_load + "/")
All that remains is to re-register the table in Hive and update the schema.
However, this is where the problems begin.
First problem. Sooner or later the resulting Parquet becomes unreadable, because Parquet and JSON treat empty fields differently.
Let's consider a typical situation. For example, yesterday this JSON arrived:
Day 1: {"a": {"b": 1}},
and today the same JSON looks like this:
Day 2: {"a": null}
Suppose we have two different partitions, each containing one row.
When we read the source data as a whole, Spark is able to infer the type and understands that "a" is a field of type "structure" with a nested field "b" of type INT. But if each partition is saved separately, the result is Parquet files with incompatible partition schemas:
df1 (a: <struct<"b": INT>>)
df2 (a: STRING NULLABLE)
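The conflict is easy to reproduce; here is a minimal sketch (paths are placeholders):

sc = spark.sparkContext

# day 1: "a" is inferred as struct<b: bigint>
spark.read.json(sc.parallelize(['{"a": {"b": 1}}'])).write.parquet("/tmp/mart/date_load=day1")

# day 2: "a" contains only null, so it is inferred as string
spark.read.json(sc.parallelize(['{"a": null}'])).write.parquet("/tmp/mart/date_load=day2")

# merging the two partition schemas now fails with a type-conflict error
spark.read.option("mergeSchema", True).parquet("/tmp/mart/*").printSchema()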
This situation is well known, so an option was specially added to drop empty fields when parsing the source data:
df = spark.read.json("...", dropFieldIfAllNull=True)
In this case the Parquet will consist of partitions that can be read together.
Anyone who has actually done this in practice will smile wryly here. Why? Because, most likely, one more situation will arise. Or two. Or, rather, both. The first, which is almost guaranteed, is that numeric types will look different from one JSON file to another: for example, {intField: 1} in one file and {intField: 1.1} in another. If such fields end up in the same portion, the schema merge reads everything correctly and infers the most precise type; but if they end up in different portions, one will have intField: int and the other intField: double.
To handle this situation there is the following flag:
df = spark.read.json("...", dropFieldIfAllNull=True, primitivesAsString=True)
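A quick sketch of the effect: the two variants of the field from the example above now infer one and the same schema, at the price of every primitive being read as a string:

sc = spark.sparkContext
df = spark.read.json(sc.parallelize(['{"intField": 1}', '{"intField": 1.1}']),
                     primitivesAsString=True)
df.printSchema()
# root
#  |-- intField: string (nullable = true)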
Now we have a folder containing partitions that can be read into a single dataframe, and a valid Parquet of the entire mart. Yes? No.
We must remember that the table is registered in Hive. Hive is case-insensitive in field names, while Parquet is case-sensitive. So partitions with the schemas field1: int and Field1: int are identical for Hive but not for Spark. Don't forget to convert the field names to lower case.
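For the top-level columns this can be done with a one-liner (a sketch; note it does not rename fields nested inside structs):

# normalize column names, since Hive is case-insensitive while Parquet is not
df = df.toDF(*[c.lower() for c in df.columns])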
After that, everything seems fine.
However, not everything is so simple. A second, also well-known, problem arises. Since each new partition is saved separately, the partition folders contain Spark service files, such as the _SUCCESS operation flag. This leads to an error when trying to read the Parquet. To avoid it, Spark has to be configured not to add service files to the folders:
hadoopConf = sc._jsc.hadoopConfiguration()
hadoopConf.set("parquet.enable.summary-metadata", "false")
hadoopConf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
Now, it seems, every day a new Parquet partition is added to the target mart folder, containing the parsed data for that day. We made sure in advance that there are no partitions with conflicting data types.
But we face a third problem. The overall schema is now unknown; worse, the table schema in Hive is wrong, since each new partition has most likely introduced a distortion into it.
The table has to be re-registered. This is easy to do: read the mart's Parquet again, take the schema, build a DDL from it, and use it to re-register the folder in Hive as an external table, updating the schema of the target mart.
Here a fourth problem awaits us. When we registered the table for the first time, we relied on Spark. Now we do it ourselves, and we have to remember that Parquet fields can begin with characters that are not allowed in Hive. For example, Spark puts the lines it could not parse into the "_corrupt_record" field. Such a field cannot be registered in Hive without being escaped.
Knowing this, we arrive at the following code:
f_def = ""
for f in pf.dtypes:
    if f[0] != "date_load":  # skip the partition column
        # escape the field name and the nested field names inside the type definition
        f_def = f_def + "," + f[0].replace("_corrupt_record", "`_corrupt_record`") + " " \
            + f[1].replace(":", "`:").replace("<", "<`").replace(",", ",`").replace("array<`", "array<")

table_define = "CREATE EXTERNAL TABLE jsonevolvtable (" + f_def[1:] + " ) "
table_define = table_define + "PARTITIONED BY (date_load string) STORED AS PARQUET LOCATION '/user/admin/testJson/testSchemaEvolution/pq/'"
hc.sql("drop table if exists jsonevolvtable")
hc.sql(table_define)
The chain of replace calls ("_corrupt_record" becomes "`_corrupt_record`", ":" becomes "`:", "<" becomes "<`", "," becomes ",`", with "array<`" turned back into "array<") wraps the field names in backquotes. That is, instead of executing unsafe DDL like:
create table tname (_field1 string, 1field string)
with field names such as "_field1, 1field", we get safe DDL in which the names are escaped: create table `tname` (`_field1` string, `1field` string).
A question arises: how do we correctly obtain a dataframe with the complete schema, the pf in the code above? Where does this pf come from? This is the fifth problem. Re-read the schemas of all partitions from the folder containing the target mart's Parquet files? That method is the safest, but also the heaviest.
The schema is already in Hive. A new schema can be obtained by merging the schema of the whole table with that of the new partition. That means taking the table schema from Hive and combining it with the schema of the new partition. This can be done by reading test metadata from Hive, saving it to a temporary folder, and using Spark to read both "partitions" at once.
In fact, we have everything we need: the original table schema in Hive and the new partition. We also have data. All that remains is to obtain a new schema that combines the mart schema with the new fields from the created partition:
from pyspark.sql import HiveContext
from pyspark.sql.functions import lit

hc = HiveContext(spark)

# parse and save the new partition
df = spark.read.json("...", dropFieldIfAllNull=True)
df.write.mode("overwrite").parquet(".../date_load=12-12-2019")

# dump one row of the existing table: its Parquet schema is the current Hive schema
pe = hc.sql("select * from jsonevolvtable limit 1")
pe.write.mode("overwrite").parquet(".../fakePartiton/")

# merging the fake partition with the new one yields the updated overall schema
pf = spark.read.option("mergeSchema", True).parquet(".../date_load=12-12-2019/*", ".../fakePartiton/*")
Next, just as in the earlier fragment, we build the table-registration DDL.
If the whole chain works correctly, namely the initializing load ran and the table was created correctly in Hive, we obtain the updated table schema.
The last problem is that a partition cannot simply be added to the Hive table, since it will end up broken. Hive has to be forced to repair its partition structure:
from pyspark.sql import HiveContext
hc = HiveContext(spark)
hc.sql("MSCK REPAIR TABLE " + db + "." + destTable)
The simple task of reading JSON and building a data mart on top of it thus turns into overcoming a number of implicit problems, whose solutions have to be found one at a time. And although those solutions are simple, finding them takes a lot of time.
To implement the construction of the mart, we had to (a consolidated sketch of the whole daily job follows the list):
- add partitions to the mart and get rid of the service files
- deal with empty fields in the source data, which Spark types
- cast primitive types to strings
- convert field names to lower case
- separate the data upload from the table registration in Hive (DDL generation)
- remember to escape field names that may be incompatible with Hive
- learn how to update the table registration in Hive
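Putting the pieces together, the daily job might look roughly like this. This is a consolidated sketch under the same placeholder paths and names as in the fragments above, assuming the initializing load has already created the table; the DDL-generation step is elided:

from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
sc = spark.sparkContext

# keep Spark from dropping _SUCCESS and summary files into partition folders
hadoopConf = sc._jsc.hadoopConfiguration()
hadoopConf.set("parquet.enable.summary-metadata", "false")
hadoopConf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")

date_load = "2019-12-13"
mart_path = "/user/admin/testJson/testSchemaEvolution/pq"

# parse today's raw data: drop all-null fields, read primitives as strings
df = spark.read.json("/data/raw/source1/date_load=" + date_load + "/",
                     dropFieldIfAllNull=True, primitivesAsString=True)
df = df.toDF(*[c.lower() for c in df.columns])  # Hive is case-insensitive

# write the new partition
df.coalesce(1).write.mode("overwrite").parquet(mart_path + "/date_load=" + date_load + "/")

# merge the current table schema (via a fake partition) with the new one
pe = spark.sql("select * from jsonevolvtable limit 1")
pe.write.mode("overwrite").parquet("/tmp/fakePartition/")
pf = spark.read.option("mergeSchema", True).parquet(
    mart_path + "/date_load=" + date_load + "/*", "/tmp/fakePartition/*")

# rebuild the DDL from pf.dtypes with escaped field names (as shown earlier),
# re-register the external table, then repair the partition list
spark.sql("MSCK REPAIR TABLE jsonevolvtable")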
Summing up, we note that the decision to build a data mart conceals many pitfalls. So if difficulties arise during implementation, it is better to turn to an experienced partner with the relevant expertise from the start.
Thank you for reading this article; we hope you find the information useful.
Source: habr.com