public interface BatchInserterIndex

The BatchInserter version of Index. Additions/updates to a
BatchInserterIndex aren't necessarily applied to the actual index
immediately; instead they are guaranteed to be written when the index is
shut down via BatchInserterIndexProvider.shutdown().

To guarantee that additions/updates are seen by updateOrAdd(long, Map),
get(String, Object), query(String, Object) and query(Object), a call to
flush() must be made before calling such a method. This gives
implementations more flexibility for performance optimizations.

| Modifier and Type | Method and Description |
|---|---|
| void | add(long entityId, Map<String,Object> properties): Adds key/value pairs for the entity to the index. |
| void | flush(): Makes sure additions/updates can be seen by get(String, Object), query(String, Object) and query(Object) so that they are guaranteed to return correct results. |
| IndexHits<Long> | get(String key, Object value): Returns exact matches from this index, given the key/value pair. |
| IndexHits<Long> | query(Object queryOrQueryObject): Returns matches from this index based on the supplied query object, which can be a query string or an implementation-specific query object. |
| IndexHits<Long> | query(String key, Object queryOrQueryObject): Returns matches from this index based on the supplied key and query object, which can be a query string or an implementation-specific query object. |
| void | setCacheCapacity(String key, int size): Sets the cache size for key/value pairs for the given key. |
| void | updateOrAdd(long entityId, Map<String,Object> properties): Adds key/value pairs for the entity to the index, deleting any previously indexed key/value pairs for the entity first. |
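As a sketch of the typical lifecycle, the following shows a batch index being created, written to, flushed and shut down. The store path is hypothetical, and the exact package names vary between Neo4j versions; this assumes the Lucene-backed provider from the neo4j-lucene-index module:

```java
import java.util.Map;

import org.neo4j.helpers.collection.MapUtil;
import org.neo4j.index.lucene.unsafe.batchinsert.LuceneBatchInserterIndexProvider;
import org.neo4j.unsafe.batchinsert.BatchInserter;
import org.neo4j.unsafe.batchinsert.BatchInserterIndex;
import org.neo4j.unsafe.batchinsert.BatchInserterIndexProvider;
import org.neo4j.unsafe.batchinsert.BatchInserters;

public class BatchIndexExample
{
    public static void main( String[] args )
    {
        // Hypothetical store path.
        BatchInserter inserter = BatchInserters.inserter( "target/batchinserter-example" );
        BatchInserterIndexProvider indexProvider =
                new LuceneBatchInserterIndexProvider( inserter );
        BatchInserterIndex actors =
                indexProvider.nodeIndex( "actors", MapUtil.stringMap( "type", "exact" ) );

        Map<String, Object> properties = MapUtil.map( "name", "Keanu Reeves" );
        long node = inserter.createNode( properties );
        actors.add( node, properties );

        // Make the addition visible to lookups before shutdown.
        actors.flush();
        long found = actors.get( "name", "Keanu Reeves" ).getSingle();

        // Shutting down the provider writes the index to disk;
        // shut down the index provider before the inserter.
        indexProvider.shutdown();
        inserter.shutdown();
    }
}
```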
void add(long entityId, Map<String,Object> properties)

Adds key/value pairs for the entity to the index. If there's a previous
index entry for the entity, it will co-exist with this new one. This
behavior exists for performance reasons, so that the index isn't forced
to check whether indexing for the entity already exists. If you need to
update the indexing for the entity and a slower indexing process is
acceptable, use updateOrAdd(long, Map) instead.

Entries added to the index aren't necessarily written to the index and
to disk until BatchInserterIndexProvider.shutdown() has been called.
Entries added to the index aren't necessarily seen by the other methods
(updateOrAdd(long, Map), get(String, Object), query(String, Object) and
query(Object)) until a call to flush() has been made.

Parameters:
entityId - the entity (i.e. id of Node or Relationship) to associate the key/value pairs with.
properties - key/value pairs to index for the entity.

void updateOrAdd(long entityId, Map<String,Object> properties)
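The co-existence behavior can be sketched as follows, assuming `index` is a BatchInserterIndex and `nodeId` is an entity id created through the BatchInserter (both hypothetical names):

```java
// add() never checks for earlier entries, so both key/value
// pairs remain searchable for the same entity.
index.add( nodeId, MapUtil.map( "name", "Keanu Reeves" ) );
index.add( nodeId, MapUtil.map( "nickname", "Neo" ) );
index.flush();

// After the flush, both lookups resolve to the same entity id.
IndexHits<Long> byName = index.get( "name", "Keanu Reeves" );
IndexHits<Long> byNickname = index.get( "nickname", "Neo" );
```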
Adds key/value pairs for the entity to the index. If there's any
previous indexing for the entity, all such indexed key/value pairs are
deleted first. This method can be considerably slower than
add(long, Map) because it must check whether there are properties
already indexed for the entity. So if you know that there's no previous
indexing for the entity, use add(long, Map) instead.

Entries added to the index aren't necessarily written to the index and
to disk until BatchInserterIndexProvider.shutdown() has been called.
Entries added to the index aren't necessarily seen by the other methods
(updateOrAdd(long, Map), get(String, Object), query(String, Object) and
query(Object)) until a call to flush() has been made. So only entries
added before the most recent flush() are guaranteed to be found by this
method.

Parameters:
entityId - the entity (i.e. id of Node or Relationship) to associate the key/value pairs with.
properties - key/value pairs to index for the entity.

IndexHits<Long> get(String key, Object value)
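A sketch of the replace semantics, again assuming hypothetical `index` and `nodeId` variables. Note the flush before the updateOrAdd call, since only flushed entries are guaranteed to be seen by this method:

```java
index.add( nodeId, MapUtil.map( "name", "Kevin" ) );
// Flush so updateOrAdd() is guaranteed to find the earlier indexing.
index.flush();

// Deletes all previously indexed key/value pairs for nodeId,
// then indexes the new ones.
index.updateOrAdd( nodeId, MapUtil.map( "name", "Keanu Reeves" ) );
index.flush();
// "name" = "Kevin" no longer matches nodeId; "Keanu Reeves" does.
```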
Returns exact matches from this index, given the key/value pair.
Matches will be for key/value pairs just as they were added by the
add(long, Map) or updateOrAdd(long, Map) method.

Entries added to the index aren't necessarily written to the index and
to disk until BatchInserterIndexProvider.shutdown() has been called.
Entries added to the index aren't necessarily seen by the other methods
(updateOrAdd(long, Map), get(String, Object), query(String, Object) and
query(Object)) until a call to flush() has been made. So only entries
added before the most recent flush() are guaranteed to be found by this
method.

Parameters:
key - the key in the key/value pair to match.
value - the value in the key/value pair to match.

Returns:
the result wrapped in an IndexHits object. If the entire result set
isn't looped through, IndexHits.close() must be called before disposing
of the result.

IndexHits<Long> query(String key, Object queryOrQueryObject)
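A sketch of consuming the result, assuming a hypothetical `index` variable. The try/finally makes the close() safe whether or not the iteration completes:

```java
IndexHits<Long> hits = index.get( "name", "Keanu Reeves" );
try
{
    for ( Long id : hits )
    {
        // process each matching entity id
    }
}
finally
{
    // Required whenever the result set isn't fully iterated.
    hits.close();
}
```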
Returns matches from this index based on the supplied key and query
object, which can be a query string or an implementation-specific query
object.

Entries added to the index aren't necessarily written to the index and
to disk until BatchInserterIndexProvider.shutdown() has been called.
Entries added to the index aren't necessarily seen by the other methods
(updateOrAdd(long, Map), get(String, Object), query(String, Object) and
query(Object)) until a call to flush() has been made. So only entries
added before the most recent flush() are guaranteed to be found by this
method.

Parameters:
key - the key in this query.
queryOrQueryObject - the query for the key to match.

Returns:
the result wrapped in an IndexHits object. If the entire result set
isn't looped through, IndexHits.close() must be called before disposing
of the result.

IndexHits<Long> query(Object queryOrQueryObject)
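As a sketch, assuming a hypothetical `index` backed by the Lucene provider, the query object for this form can be a Lucene query string scoped to the given key:

```java
// Wildcard query against the "name" key.
IndexHits<Long> hits = index.query( "name", "Keanu*" );
try
{
    for ( Long id : hits )
    {
        // ids of entities whose "name" starts with "Keanu"
    }
}
finally
{
    hits.close();
}
```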
Returns matches from this index based on the supplied query object,
which can be a query string or an implementation-specific query object.

Entries added to the index aren't necessarily written to the index and
to disk until BatchInserterIndexProvider.shutdown() has been called.
Entries added to the index aren't necessarily seen by the other methods
(updateOrAdd(long, Map), get(String, Object), query(String, Object) and
query(Object)) until a call to flush() has been made. So only entries
added before the most recent flush() are guaranteed to be found by this
method.

Parameters:
queryOrQueryObject - the query to match.

Returns:
the result wrapped in an IndexHits object. If the entire result set
isn't looped through, IndexHits.close() must be called before disposing
of the result.

void flush()
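A sketch of the single-argument form, assuming a hypothetical Lucene-backed `index`; with that provider a full query string can reference several keys at once:

```java
IndexHits<Long> hits = index.query( "name:Keanu* AND title:\"The Matrix\"" );
try
{
    Long first = hits.hasNext() ? hits.next() : null;
}
finally
{
    // close() is required because the hits weren't fully iterated.
    hits.close();
}
```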
Makes sure additions/updates can be seen by get(String, Object),
query(String, Object) and query(Object), so that they are guaranteed to
return correct results. Also, updateOrAdd(long, Map) will find previous
indexing correctly after a flush.

void setCacheCapacity(String key, int size)
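The visibility guarantee can be sketched as follows, assuming hypothetical `index` and `nodeId` variables:

```java
index.add( nodeId, MapUtil.map( "name", "Keanu Reeves" ) );

// Without this flush, the lookup below may miss the entry just added.
index.flush();

// After flush() the addition is guaranteed to be visible.
Long found = index.get( "name", "Keanu Reeves" ).getSingle();
```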
Sets the cache size for key/value pairs for the given key. Caching
values can speed up get(String, Object) lookups significantly, but may
at the same time slow down insertion of data somewhat.

Be sure to call this method to enable caching for keys that will be
used a lot in lookups. It's best to call this method for your keys
right after the index has been created.

Parameters:
key - the key to set cache capacity for.
size - the number of values to cache results for.

Copyright © 2002–2014 The Neo4j Graph Database Project. All rights reserved.