
How to set put mapping on an elasticsearch index

Published: 2022-04-22 15:06:22  Source: 億速云  Author: iii  Category: Development

This article explains how put mapping is set on an elasticsearch index. The steps are straightforward and practical; let's walk through them.

    The mapping mechanism makes indexing data in elasticsearch very flexible, close to schema-free. A mapping can be defined when the index is created, or set later. Setting it later means either modifying an existing mapping (attributes of existing fields cannot be changed; in practice you can only add new fields) or putting a mapping on an index that does not yet have one. A put mapping operation must be performed by the master node, because it modifies the cluster metadata, and it is always scoped to a specific type of a specific index.
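    Before diving into the internals, a caller-side sketch may help. The snippet below adds one new field to an existing type through the 1.x-era Java admin client; it is a minimal sketch, and the index name products, type item and field brand are invented for illustration:

    // Minimal sketch, assuming an already connected 1.x-era Client.
    // "products", "item" and "brand" are illustrative names only.
    import org.elasticsearch.action.admin.indices.mapping.put.PutMappingResponse;
    import org.elasticsearch.client.Client;

    public class PutMappingExample {
        public static void addBrandField(Client client) {
            PutMappingResponse response = client.admin().indices()
                    .preparePutMapping("products")   // concrete target index (or several)
                    .setType("item")                 // the type whose mapping is extended
                    // only the new field is sent; existing fields are left untouched
                    .setSource("{\"item\":{\"properties\":{"
                            + "\"brand\":{\"type\":\"string\",\"index\":\"not_analyzed\"}}}}")
                    .execute().actionGet();
            // acknowledged == true means the master applied the new cluster state in time
            System.out.println("acknowledged: " + response.isAcknowledged());
        }
    }

    Such a request arrives on the master as the PutMappingRequest handled below; redefining an existing field in the same way would hit the conflict check discussed later.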

    In the Action support analysis we looked at several abstract Action types; the put mapping action, TransportPutMappingAction, is a subclass of TransportMasterNodeOperationAction. Every subclass of TransportMasterNodeOperationAction implements the masterOperation method according to its own function. Here the implementation looks like this:

    protected void masterOperation(final PutMappingRequest request, final ClusterState state, final ActionListener<PutMappingResponse> listener) throws ElasticsearchException {
            final String[] concreteIndices = clusterService.state().metaData().concreteIndices(request.indicesOptions(), request.indices());
          // build the cluster-state update request from the incoming PutMappingRequest
            PutMappingClusterStateUpdateRequest updateRequest = new PutMappingClusterStateUpdateRequest()
                    .ackTimeout(request.timeout()).masterNodeTimeout(request.masterNodeTimeout())
                    .indices(concreteIndices).type(request.type())
                    .source(request.source()).ignoreConflicts(request.ignoreConflicts());
          // delegate to metaDataMappingService.putMapping, passing a listener for the acknowledgement
            metaDataMappingService.putMapping(updateRequest, new ActionListener<ClusterStateUpdateResponse>() {
    
                @Override
                public void onResponse(ClusterStateUpdateResponse response) {
                    listener.onResponse(new PutMappingResponse(response.isAcknowledged()));
                }
    
                @Override
                public void onFailure(Throwable t) {
                    logger.debug("failed to put mappings on indices [{}], type [{}]", t, concreteIndices, request.type());
                    listener.onFailure(t);
                }
            });
        }

    That is all TransportPutMappingAction does in masterOperation; there is not much logic here. The real work happens in MetaDataMappingService. As with the CreateIndex operation analyzed earlier, put mapping submits an update task to the master, and all of the logic lives in the task's execute method. Like the CreateIndex task, it must be acknowledged within the given timeout. Its code looks like this:

    public void putMapping(final PutMappingClusterStateUpdateRequest request, final ActionListener<ClusterStateUpdateResponse> listener) {
        // submit a high-priority cluster-state update task
            clusterService.submitStateUpdateTask("put-mapping [" + request.type() + "]", Priority.HIGH, new AckedClusterStateUpdateTask<ClusterStateUpdateResponse>(request, listener) {
    
                @Override
                protected ClusterStateUpdateResponse newResponse(boolean acknowledged) {
                    return new ClusterStateUpdateResponse(acknowledged);
                }
    
                @Override
                public ClusterState execute(final ClusterState currentState) throws Exception {
                    List<String> indicesToClose = Lists.newArrayList();
                    try {
                // every target index must already exist in the cluster metadata, otherwise fail
                        for (String index : request.indices()) {
                            if (!currentState.metaData().hasIndex(index)) {
                                throw new IndexMissingException(new Index(index));
                            }
                        }
    
                        // the index must also exist in indicesService; if it is not instantiated locally, create it here temporarily
                        for (String index : request.indices()) {
                            if (indicesService.hasIndex(index)) {
                                continue;
                            }
                            final IndexMetaData indexMetaData = currentState.metaData().index(index);
                  // create a temporary IndexService so the mapping can be parsed and merged
                            IndexService indexService = indicesService.createIndex(indexMetaData.index(), indexMetaData.settings(), clusterService.localNode().id());
                            indicesToClose.add(indexMetaData.index());
                            // make sure to add custom default mapping if exists
                            if (indexMetaData.mappings().containsKey(MapperService.DEFAULT_MAPPING)) {
                                indexService.mapperService().merge(MapperService.DEFAULT_MAPPING, indexMetaData.mappings().get(MapperService.DEFAULT_MAPPING).source(), false);
                            }
                            // only add the current relevant mapping (if exists)
                            if (indexMetaData.mappings().containsKey(request.type())) {
                                indexService.mapperService().merge(request.type(), indexMetaData.mappings().get(request.type()).source(), false);
                            }
                        }
                // merge the updated mapping
                        Map<String, DocumentMapper> newMappers = newHashMap();
                        Map<String, DocumentMapper> existingMappers = newHashMap();
                // parse and simulate the mapping merge for each index
                        for (String index : request.indices()) {
                            IndexService indexService = indicesService.indexServiceSafe(index);
                            // try and parse it (no need to add it here) so we can bail early in case of parsing exception
                            DocumentMapper newMapper;
                            DocumentMapper existingMapper = indexService.mapperService().documentMapper(request.type());
                            if (MapperService.DEFAULT_MAPPING.equals(request.type())) { // the _default_ mapping is handled separately
                                // _default_ types do not go through merging, but we do test the new settings. Also don't apply the old default
                                newMapper = indexService.mapperService().parse(request.type(), new CompressedString(request.source()), false);
                            } else {
                                newMapper = indexService.mapperService().parse(request.type(), new CompressedString(request.source()), existingMapper == null);
                                if (existingMapper != null) {
                                    // first, simulate
                                    DocumentMapper.MergeResult mergeResult = existingMapper.merge(newMapper, mergeFlags().simulate(true));
                                    // if we have conflicts, and we are not supposed to ignore them, throw an exception
                                    if (!request.ignoreConflicts() && mergeResult.hasConflicts()) {
                                        throw new MergeMappingException(mergeResult.conflicts());
                                    }
                                }
                            }
    
                            newMappers.put(index, newMapper);
                            if (existingMapper != null) {
                                existingMappers.put(index, existingMapper);
                            }
                        }
    
                        String mappingType = request.type();
                        if (mappingType == null) {
                            mappingType = newMappers.values().iterator().next().type();
                        } else if (!mappingType.equals(newMappers.values().iterator().next().type())) {
                            throw new InvalidTypeNameException("Type name provided does not match type name within mapping definition");
                        }
                        if (!MapperService.DEFAULT_MAPPING.equals(mappingType) && !PercolatorService.TYPE_NAME.equals(mappingType) && mappingType.charAt(0) == '_') {
                            throw new InvalidTypeNameException("Document mapping type name can't start with '_'");
                        }
    
                        final Map<String, MappingMetaData> mappings = newHashMap();
                        for (Map.Entry<String, DocumentMapper> entry : newMappers.entrySet()) {
                            String index = entry.getKey();
                            // do the actual merge here on the master, and update the mapping source
                            DocumentMapper newMapper = entry.getValue();
                            IndexService indexService = indicesService.indexService(index);
                            if (indexService == null) {
                                continue;
                            }
    
                            CompressedString existingSource = null;
                            if (existingMappers.containsKey(entry.getKey())) {
                                existingSource = existingMappers.get(entry.getKey()).mappingSource();
                            }
                            DocumentMapper mergedMapper = indexService.mapperService().merge(newMapper.type(), newMapper.mappingSource(), false);
                            CompressedString updatedSource = mergedMapper.mappingSource();
    
                            if (existingSource != null) {
                                if (existingSource.equals(updatedSource)) {
                                    // same source, no changes, ignore it
                                } else {
                                    // use the merged mapping source
                                    mappings.put(index, new MappingMetaData(mergedMapper));
                                    if (logger.isDebugEnabled()) {
                                        logger.debug("[{}] update_mapping [{}] with source [{}]", index, mergedMapper.type(), updatedSource);
                                    } else if (logger.isInfoEnabled()) {
                                        logger.info("[{}] update_mapping [{}]", index, mergedMapper.type());
                                    }
                                }
                            } else {
                                mappings.put(index, new MappingMetaData(mergedMapper));
                                if (logger.isDebugEnabled()) {
                                    logger.debug("[{}] create_mapping [{}] with source [{}]", index, newMapper.type(), updatedSource);
                                } else if (logger.isInfoEnabled()) {
                                    logger.info("[{}] create_mapping [{}]", index, newMapper.type());
                                }
                            }
                        }
    
                        if (mappings.isEmpty()) {
                            // no changes, return
                            return currentState;
                        }
                // rebuild the cluster metadata with the updated mappings
                        MetaData.Builder builder = MetaData.builder(currentState.metaData());
                        for (String indexName : request.indices()) {
                            IndexMetaData indexMetaData = currentState.metaData().index(indexName);
                            if (indexMetaData == null) {
                                throw new IndexMissingException(new Index(indexName));
                            }
                            MappingMetaData mappingMd = mappings.get(indexName);
                            if (mappingMd != null) {
                                builder.put(IndexMetaData.builder(indexMetaData).putMapping(mappingMd));
                            }
                        }
    
                        return ClusterState.builder(currentState).metaData(builder).build();
                    } finally {
                        for (String index : indicesToClose) {
                            indicesService.removeIndex(index, "created for mapping processing");
                        }
                    }
                }
            });
        }
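    The simulate-then-merge step above is what surfaces mapping conflicts to the caller. As a hedged caller-side sketch (same invented names and 1.x-era client as before), changing the type of an existing field is rejected by the master unless the request asks for conflicts to be ignored:

    // Sketch: a conflicting mapping change, assuming "brand" was previously a string.
    // The master's simulated merge reports the conflict and the update is rejected.
    import org.elasticsearch.ElasticsearchException;
    import org.elasticsearch.client.Client;

    public class MappingConflictExample {
        public static void changeBrandType(Client client) {
            String conflicting = "{\"item\":{\"properties\":{"
                    + "\"brand\":{\"type\":\"integer\"}}}}";
            try {
                client.admin().indices()
                        .preparePutMapping("products")
                        .setType("item")
                        .setSource(conflicting)
                        .execute().actionGet();
            } catch (ElasticsearchException e) {
                // the underlying cause is the MergeMappingException thrown when
                // existingMapper.merge(newMapper, simulate(true)) finds conflicts
                // and request.ignoreConflicts() is false
                System.out.println("rejected: " + e.getMessage());
            }
        }
    }

    If the request is built with setIgnoreConflicts(true) instead (which becomes the request.ignoreConflicts() flag checked above), the update is accepted and the conflicting field definitions are simply left unchanged.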

    That concludes this look at how put mapping is set on an elasticsearch index; the best way to consolidate it is to try the operations out in practice.
