Network Connector

Configuring SearchBlox

Before installing the network crawler, install SearchBlox and create a Custom Collection.

Installing the Network Crawler

Contact [email protected] to get the download link for SearchBlox-network-crawler.

Download the latest version of SearchBlox-network-crawler. Extract the downloaded zip to /opt/searchblox-network on Linux or C:/searchblox-network on Windows.
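For example, on Linux (the zip filename is illustrative; use the name of the file you downloaded):

sudo mkdir -p /opt/searchblox-network
sudo unzip SearchBlox-network-crawler.zip -d /opt/searchblox-network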

Configuring SMB

The extracted folder contains a folder named /conf, which holds all the configuration files needed for the crawler.

Config.yml
This is the configuration file that maps the network crawler to your SearchBlox instance. Edit the file in your favorite editor.

apikey: This is the API Key of your SearchBlox instance. You can find it in the Admin tab of the SearchBlox instance.

colname: Name of the custom collection you created.

colid: The Collection ID of the collection you created. It can be found next to the collection name in the Collections tab of the SearchBlox instance.

url: The URL of the SearchBlox instance.

sbpkey: This is the SB-PKEY of your SearchBlox instance. You can find it in the Users tab of the SearchBlox instance. Only users with the Admin role are issued an SB-PKEY, so create an admin user if you have not already done so.

apikey: DD7B0E5E6BB786F10D70A86399806591
colname: custom
colid: 2
url: https://localhost:8443/
sbpkey: MNiwiA0TNlIBG0jZpWVPNuszaT/jT39G03kpF01gUpjGQK8+ZSKtQMNVqKxxke/wEthSWw==

searchblox.yml
This is the Elasticsearch configuration file that is used by SearchBlox network crawler. Edit the file in your favorite editor.

searchblox.elasticsearch.url: URL used by Elasticsearch, including the port. If Elasticsearch runs on an IP address or a domain, configure it here.

searchblox.elasticsearch.host: Hostname used for Elasticsearch.

searchblox.elasticsearch.port: Port used for Elasticsearch.

searchblox.basic.username: Username for Elasticsearch.

searchblox.basic.password: Password for Elasticsearch.

es.home: Path to the Elasticsearch home directory, depending on the OS you use. On Linux the path is /opt/searchblox/elasticsearch.

searchblox.elasticsearch.url: https://localhost:9200/
searchblox.elasticsearch.host: localhost
searchblox.elasticsearch.port: 9200
searchblox.basic.username: searchblox
searchblox.basic.password: xxxxxxxxxxx
es.home: C:\SearchBloxServer\elasticsearch

windowsshare.yml
Enter the details of the domain server, authentication domain, username, password, folder path, disallow path, allowed formats, and recrawl interval in C:/searchblox-network/conf/windowsshare.yml. You can also enter details for more than one server, or for more than one path on the same server, in the same windowsshare.yml file.

The structure of the file is shown here.

# The recrawl interval in days.
recrawl: 1
servers:
# The IP or domain of the server.
  - server: 89.107.56.109
# The authentication domain (optional).
    authentication-domain:
# The administrator username.
    username: administrator
# The administrator password.
    password: xxxxxxxx
# The folder path(s) whose data should be indexed.
    shared-folder-path: [/test/jason/pencil/]
# Paths inside the shared folder to exclude from indexing.
    disallow-path: [/admin/,/js/]
# The file formats allowed for indexing.
    allowed-format: [ txt,doc,docx,xls,xlsx,xltm,ppt,pptx,html,htm,pdf,odt,ods,rtf,vsd,xlsm,mpp,pps,one,potx,pub,pptm,odp,dotx,csv,docm,pot ]
# Details of another server, or another path on the same server, to crawl.
  - server: 89.107.56.109
    authentication-domain:
    username: administrator
    password: xxxxxxxxxx
    shared-folder-path: [/test/jason/newone/]
    disallow-path: [/admin/,/js/]
    allowed-format: [ txt,doc,docx,xls,xlsx,xltm,ppt,pptx,html,htm,pdf,odt,ods,rtf,vsd,xlsm,mpp,pps,one,potx,pub,pptm,odp,dotx,csv,docm,pot ]

Starting the Crawler

The crawler can be started with start.sh on Linux or start.bat on Windows. The crawler runs in the background, but you can view the logs in the logs folder.
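For example, on Linux (the log filename may vary by version):

cd /opt/searchblox-network
sh start.sh
tail -f logs/*.log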

📘 Note

  • You can only run one network crawler at a time. If you need to run the crawler for different paths or different servers, enter the details for all of them in the same windowsshare.yml file.

  • To re-run the crawler in another collection, delete the sb_network index using a tool that can communicate with Elasticsearch.

  • The network connector has to be stopped manually.

  • If plain passwords are not allowed on your server, enable plain-text passwords by adding the following JVM option in start.bat of the network connector (see the example after this note):
    -Djcifs.smb.client.disablePlainTextPasswords=false
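For example, the option might be appended to the JAVA_OPTS line in start.bat (a sketch; the existing JAVA_OPTS line in your start.bat may differ):

set JAVA_OPTS=%JAVA_OPTS% -Djcifs.smb.client.disablePlainTextPasswords=false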

Searching Securely Using SearchBlox

Enable Active Directory secure search under Search → Security settings.
Secure Search works with your Active Directory configuration: enable the checkbox for Secured Search and enter the required settings.

  • Select Enable Secured Search, configure LDAP, and then test the connection.
  • Enter the Active Directory details
LDAP URL: LDAP URL that specifies the base search for the entries.
Search Base: Search base for the Active Directory.
Username: Admin username.
Password: Password for the username.
Filter Type: Filter type can be default or document.
Enable document filter: Enable this option to filter search results based on users.
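For example, the entries might look like the following (the values are illustrative; substitute your own domain controller, search base, and credentials):

LDAP URL: ldap://ad.example.com:389
Search Base: DC=example,DC=com
Username: administrator
Password: xxxxxxxx
Filter Type: default
Enable document filter: enabled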

Admin Access to File Share

If the SMB file share is on another server on the same network and requires permission, run the SearchBlox server service with admin access and enter your credentials. Running as an admin account, or as an account that has access to the files, allows the files on the share to be indexed successfully.

Make sure to run the network crawler as Admin in a similar manner.

How to Increase Memory in the Network Connector

For Windows
Go to <network_crawler_installationPath>/start.bat and allocate more RAM by editing the following line:
rem set JAVA_OPTS=%JAVA_OPTS% -Xms1G -Xmx1G
Remove the rem prefix to uncomment the line, and replace 1G with 2G or 3G.
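For example, to allocate 2 GB, the edited line would read:

set JAVA_OPTS=%JAVA_OPTS% -Xms2G -Xmx2G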

For Linux
Go to <network_crawler_installationPath>/start.sh, uncomment the following line, and allocate more memory by replacing 1G with 2G or 3G:
JAVA_OPTS="$JAVA_OPTS -Xms1G -Xmx1G"
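For example, to allocate 2 GB, the uncommented line would read:

JAVA_OPTS="$JAVA_OPTS -Xms2G -Xmx2G"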

Deleting sb_network to Rerun the Crawler in Another Collection

To rerun the network crawler in another collection, delete the sb_network index using a tool that can communicate with Elasticsearch.
Go to https://localhost:9200/_cat/indices and check whether you can view the sb_network index.

Postman can be used to access Elasticsearch.

Start Postman and create a DELETE request for the sb_network index, for example DELETE https://localhost:9200/sb_network.

Look for the "acknowledged": true message in the response.
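Alternatively, the same deletion can be done from the command line with curl (a sketch, assuming the default Elasticsearch URL and the credentials from searchblox.yml; -k skips verification of the self-signed certificate):

curl -k -u searchblox:yourpassword -X DELETE https://localhost:9200/sb_network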

Check https://localhost:9200/_cat/indices; the sb_network index should no longer appear among the indices.

Rerun the crawler after making the necessary changes to your config.yml.