Data release page shows "Loading"

On our mirror site, when I click the “DCC Data Releases” menu, it shows a page stuck at “Loading”. The JavaScript console displays the following error:

{"method":"GET","transformRequest":[null],"transformResponse":[null],"headers":{"Accept":"application/json, text/plain, */*"},"cache":false,"url":"","cached":false},"statusText":"Bad Gateway"}

I checked the portal log; there is a stack trace. It seems the backend tries to authenticate with a server and gets “connection refused”.

Currently our mirror site doesn’t have authentication set up. We tried to disable login by setting auth.enabled to false in application.yml. That suppresses some error messages in the UI, but doesn’t solve this issue.

I am wondering why authentication is required for displaying data releases. Is there a workaround for this issue, or is setting up authentication the only way to make it work?

I doubt it is related to authentication. I added “auth.enabled = false” to application.yml, and the Java exception is gone.
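In YAML form, the override above looks something like this (the auth.enabled key comes from this thread; the exact nesting is an assumption and may differ by portal version):

```yaml
# Disable portal login on a mirror without SSO (sketch; nesting assumed)
auth:
  enabled: false
```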

In the Varnish server config, the “/api/v1/download” endpoint is not cached, so requests go straight to the portal server. It appears the portal server has received the request, but somehow it doesn’t print any messages. Maybe it is just stuck in processing, but there is no abort message or anything.

I am wondering if there is a database issue, with either the Elasticsearch or the Postgres setup on the portal servers.

Brady, could you send me portal configuration related to the download server as well as the download server configuration?
Also, could you send me a snippet of portal and download server logs when you try to click the “DCC Data Releases” menu?

Portal server application.yml:

  port: 8080

# Management
  port: 8081

# Datasource
  driver-class-name: org.h2.Driver
  url: jdbc:h2:mem:genelist;MODE=PostgreSQL;DB_CLOSE_DELAY=-1;INIT=runscript from '/srv/dcc-portal-server/conf/schema.sql'
  max-active: 10
  max-idle: 1
  min-idle: 1

# Controls generation of "Last-Modified" and "ETag" HTTP headers
  enabled: true

  # Enable Last-Modified HTTP header generation
  enableLastModified: true

  # List of resources for which Last-Modified HTTP header should not be generated. Format: regex accepted by java.lang.String
  excludeLastModified: ["^v\\d+/short$", "^v\\d+/auth/.*$", "^v\\d+/download.*$", "^v\\d+/analysis.*$", "^v\\d+/entityset.*$", "^v\\d+/settings.*$", "^v\\d+/repository/files.*$" ]

  # Enable ETag HTTP header generation
  enableETag: false

  # List of resources for which ETag HTTP header should not be generated. Format: regex accepted by java.lang.String
  excludeETag: []

# Authentication
  ssoUrl: ""
  serviceUrl: "https://localhost:8443"
  clientId: "mgmt"
  clientSecret: "pass"
  enableHttpLogging: true
  enableStrictSSL: false

# In memory data grid
  enabled: true

  # NB: Below hazelcast settings do not have default values. The portal won't start if they're unset

  # Credentials to enter a group. E.g. dev, uat, prod groups
  groupName: "dcc-prod"
  groupPassword: "DccProdPasswd"

  # Time-to-live settings.
  # Maximum number of seconds for each entry to stay in the cache. Entries that are
  # older than the set value and not updated for that time
  # will get automatically evicted from the cache.

  # Expire inactive users in 15 minutes
  usersCacheTTL: 900

  # Expire openId authentication attempt in 2 minutes
  openidAuthTTL: 120

# ElasticSearch index and hosts
  indexName: icgc22-13
  repoIndexName: icgc-repository-20160830
     - host: lxv-icgc-elastic01
       port: 9300
  # List of TransportClient settings. 
  # E.g. 
  # "client.transport.sniff": true 
    "client.transport.sniff": true

# Mail
  enabled: false

# Data download
  enabled: true
  serverUrl: "http://lxv-icgc-download01:9090"
  publicServerUrl: "http://lxv-icgc-download01:9090"
  sharedSecret: "deadbeefdeadbeefdeadbeefdeadbeef"
  aesKey: "deadbeefdeadbeef"

  # Defines an external URL when the portal is behind a reverse proxy / load balancer. E.g. shortUrl resource uses it for generation of valid URLs
  baseUrl: https://lxv-icgc-nginx01

# ICGC Client configuration

  # CGP/DACO API endpoint
  cgpUrl: "http://localhost"

  # ShortUrl API endpoint
  shortUrl: "http://localhost"

  # CUD API endpoint
  cudUrl: "http://localhost"

  # CMS API endpoint
  cmsUrl: "http://localhost"

  # CUD credentials. Obtained from
  cudAppId: "<fill in>"
  cudUser: "<fill in>"
  cudPassword: "<fill in>"

  # OAuth credentials
  # To generate the credentials go to, log in, go to the "OAuth Settings" tab and request "Regenerate all tokens"
  consumerKey: "<fill in>"
  consumerSecret: "<fill in>"
  accessToken: "<fill in>"
  accessSecret: "<fill in>"

  # Log HTTP requests to the API
  enableHttpLogging: false

  # Disallow self-signed SSL certificates
  enableStrictSSL: true

# Set analysis configuration

  # Limit for number of items to return from ElasticSearch
  maxPreviewNumberOfHits: 1000
  maxNumberOfHits: 20000
  maxMultiplier: 3

# Release metadata configuration
  releaseDate: "placeholder"
  dataVersion: 2
# Feature Flags
  AUTH_TOKEN: true
  ICGC_CLOUD: true

  enabled: false

Download server application.yml:

# Server
  port: 9090
    keyStore: "../conf/keystore.ks"
    keyStorePassword: "private"

server.compression.enabled: true
server.compression.mime-types: application/json,application/xml,text/html,text/xml,text/plain

  security.enabled: false
  context-path: /srv-info

  sharedSecret: "xyz"
  aesKey: "xyz"
  ttlHours: 1

  exportsPath: "/icgc/export"

# Production

# Spring
  profiles: production

# Hadoop
    fs.defaultFS: hdfs://lxv-icgc-hdfs01:8020

  user: user
  password: pass

# Mongo
  uri: mongodb://localhost/dcc-download

# Spark Job configuration
  inputDir: "/icgc/input/"

Here is the nginx log when I click on the “Data releases” menu:

2016/09/15 05:45:09 [error] 9873#0: *5860 upstream prematurely closed connection while reading response header from upstream, client:, server:, request: "GET /api/v1/download/info HTTP/1.1", upstream: "", host: "", referrer: ""
2016/09/15 05:45:09 [error] 9873#0: *5860 upstream prematurely closed connection while reading response header from upstream, client:, server:, request: "GET /api/v1/download/info HTTP/1.1", upstream: "", host: "", referrer: ""

There is no entry in the portal log, which is strange. I captured a tcpdump earlier; it shows the portal server ACKed the request at the TCP level, but there is no trace in the portal server log.


I noticed that the data imported into HDFS is release_21. The index imported into Elasticsearch is release 22 (it is shown on the homepage of the portal).

Is this a data mismatch? Can this cause problems?

@vitalii Is it possible that the dummy directories that were set up to bring up a download server are the cause of the issue?

This looks like a misconfiguration issue.

sharedSecret and aesKey must be the same on both Portal and Download servers.

# Data download (portal server)
  sharedSecret: "deadbeefdeadbeefdeadbeefdeadbeef"
  aesKey: "deadbeefdeadbeef"

# Data download (download server)
  sharedSecret: "xyz"
  aesKey: "xyz"
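A quick way to confirm the two files agree is to diff the relevant keys. The real files live under each server’s conf directory; since those paths are specific to this deployment, the sketch below uses two inline sample files standing in for them:

```shell
# Create two sample config fragments standing in for the portal and
# download server application.yml files (real paths are deployment-specific).
cat > portal.yml <<'EOF'
sharedSecret: "deadbeefdeadbeefdeadbeefdeadbeef"
aesKey: "deadbeefdeadbeef"
EOF
cat > download.yml <<'EOF'
sharedSecret: "xyz"
aesKey: "xyz"
EOF

# Extract the two keys from each file and compare.
grep -E 'sharedSecret|aesKey' portal.yml > portal-keys.txt
grep -E 'sharedSecret|aesKey' download.yml > download-keys.txt
if diff portal-keys.txt download-keys.txt > /dev/null; then
  echo "secrets match"
else
  echo "secrets differ"
fi
```

With the values from this thread, the script reports that the secrets differ, which is exactly the mismatch described above.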

Could you do the following to get some records in the portal logs?

  1. Add the requestLoggingEnabled: true configuration property to the download section of the portal’s config and restart it. The configuration will look something like:

# Data download
  requestLoggingEnabled: true

  2. Go to the portal server directly on port 8080 (do not go through the nginx and Varnish servers) and navigate to the “DCC Data Releases” menu.

There should be some log records after this. Could you send them?

Lastly, you should download the latest release_22 data; otherwise, the portal server will not work properly. You can put the release_22 and release_21 directories side by side.
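The side-by-side layout would sit under the download server’s exportsPath ("/icgc/export" in the config above). The sketch below uses /tmp instead of the real root so it is runnable anywhere:

```shell
# Sketch of the release layout the download server serves; the real
# root is the exportsPath from the download server config ("/icgc/export").
ROOT=/tmp/icgc/export
mkdir -p "$ROOT/release_21" "$ROOT/release_22"

# Both releases now sit side by side under the export root.
ls "$ROOT"
```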

Hi @vitalii, I tried to download the latest release_22 data, but the file is still release_21 data. Is there a link for the release_22 data?

@vitalii I have changed the secret/key on the download server to match the portal server’s and restarted the servers. (“xyz” is not the real value; I replaced the password/key to hide it for security and privacy reasons.) I still see the “Loading” issue.

Below is the log message on portal log when I visit data release page:

2016-09-15 22:48:39,769 [http-nio-8080-exec-3] INFO  c.s.j.a.c.f.LoggingFilter - 1 * Client out-bound request
1 > GET http://lxv-icgc-download01:9090/srv-info/health
1 > Content-Type: application/json; charset=utf-8

@brady Is there anything else in the log after this record? That’s only a snippet of the request. What was the response?
Could you also check the download server log?

From your download server configuration it looks like it is configured with an HTTPS listener enabled, but the query goes over HTTP.

What happens when you run the following?

curl -v http://lxv-icgc-download01:9090/srv-info/health

BTW, is there any firewall between the servers?

@vitalii There is no log after this record, and there is no log in the download server. There is no response.

What happens when you run the following?
curl -v http://lxv-icgc-download01:9090/srv-info/health

bzuo@lxv-icgc-portal01:/srv/dcc-portal-server/logs$ curl -v http://lxv-icgc-download01:9090/srv-info/health
* Hostname was NOT found in DNS cache
*   Trying
* Connected to lxv-icgc-download01 ( port 9090 (#0)
> GET /srv-info/health HTTP/1.1
> User-Agent: curl/7.35.0
> Host: lxv-icgc-download01:9090
> Accept: */*

There is no firewall between them. I tried telnet to port 9090 on the download server; it connects fine.

Looks like the download server only listens on port 9090, not on 443 as I expected:

bzuo@lxv-icgc-download01:/srv/dcc-download-server/logs$ sudo netstat -anp | grep java
tcp        0      0*               LISTEN      4167/java
tcp        0      0  *               LISTEN      4167/java
tcp        0      0         ESTABLISHED 4167/java
unix  2      [ ]         STREAM     CONNECTED     242499   4167/java
unix  2      [ ]         STREAM     CONNECTED     244829   4167/java

Dusan and I enabled SSL for the download server. I think it makes sense: if external users need to download release files from the download server, they would use HTTPS instead of HTTP. Does the download server listen on two ports, 9090 for internal communication and 443 for external downloading?

@brady, the download server listens on only one port, which is 9090 in this setup.
Could you execute the following?
curl -k -v https://lxv-icgc-download01:9090/srv-info/health

There is no response, but it connects over SSL:

bzuo@lxv-icgc-portal01:/tmp$ curl -k -v https://lxv-icgc-download01:9090/srv-info/health
* Hostname was NOT found in DNS cache
*   Trying
* Connected to lxv-icgc-download01 ( port 9090 (#0)
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):

There is no log on download server.

@brady, I think some output is missing from this snippet. When I hit our download server with that command, I receive the following:

$ curl -k -v
* Hostname was NOT found in DNS cache
*   Trying
* Connected to ( port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-DES-CBC3-SHA
* Server certificate:
*        subject: OU=Domain Control Validated; CN=*
*        start date: 2014-10-01 16:50:33 GMT
*        expire date: 2016-10-01 16:50:33 GMT
*        issuer: C=US; ST=Arizona; L=Scottsdale;, Inc.; OU=; CN=Go Daddy Secure Certificate Authority - G2
*        SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
> GET /srv-info/health HTTP/1.1
> User-Agent: curl/7.38.0
> Host:
> Accept: */*
< HTTP/1.1 401 Unauthorized
* Server Apache-Coyote/1.1 is not blacklisted
< Server: Apache-Coyote/1.1
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: 0
< Strict-Transport-Security: max-age=31536000 ; includeSubDomains
< X-Frame-Options: DENY
< Set-Cookie: JSESSIONID=70BED92E4F4C5C6CFC44553991231E93; Path=/; Secure; HttpOnly
< WWW-Authenticate: Basic realm="Realm"
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Vary: Accept-Encoding
< Date: Thu, 15 Sep 2016 15:50:26 GMT
* Connection #0 to host left intact
{"timestamp":1473954626057,"status":401,"error":"Unauthorized","message":"Full authentication is required to access this resource","path":"/srv-info/health"}

Unfortunately, I cannot identify the issue without more information.
From what I understand so far, you should update the portal server configuration to use https in the download server URL.
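Assuming that diagnosis, the change in the portal’s download section would look like this (host name and keys taken from the config earlier in this thread; the scheme is the only edit):

```yaml
# Data download: point the portal at the download server's HTTPS listener
  serverUrl: "https://lxv-icgc-download01:9090"
  publicServerUrl: "https://lxv-icgc-download01:9090"
```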

@vitalii The example output from your download server was very helpful for finding the answer. We use an SSL certificate from a service provider, and I have no idea how they created the certificate; apparently it doesn’t work in our environment. I replaced the certificate with a self-signed one, and the page now shows a list of data releases.
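For anyone reproducing this, a self-signed certificate can be generated with openssl and bundled into a keystore. The CN below matches the internal host name from this thread and the keystore password mirrors the "private" value in the download server config; both are assumptions, and the server may additionally need its keystore type set to PKCS12:

```shell
# Generate a self-signed key pair; the CN is the download server's host name.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=lxv-icgc-download01" \
  -keyout download-key.pem -out download-cert.pem

# Bundle the key and certificate into a PKCS12 keystore the Java server can load.
openssl pkcs12 -export -inkey download-key.pem -in download-cert.pem \
  -name download-server -passout pass:private -out keystore.p12
```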

It appears that some links are broken. For example, README.txt is not accessible. That could be because we have release 21 instead of release 22.

Vitalii, thank you very much for your help. There were multiple issues, which made this harder to debug:

  1. The portal server and download server keys didn’t match.
  2. The portal server used HTTP while the download server uses HTTPS.
  3. We used an incompatible certificate on the download server.

I am very glad we have solved all these issues! If you like, please check out the data release page.

@brady Good to hear that the issue is fixed.

However, it’s still impossible to download files from the download server.

In general, the download server should be exposed to the Internet: when a user downloads a file from the portal, he/she is redirected to the download server. Currently your download server is located on an internal network.

Check the screenshot: lxv-icgc-download01 is your local DNS name. That’s why the README.txt can’t be displayed; it can’t be downloaded.

In the portal configuration you should set publicServerUrl to an external IP address or DNS name.

# Data download
  publicServerUrl: "http://lxv-icgc-download01:9090"
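A corrected value would use a name that resolves publicly (example.org here is a placeholder; the real host, scheme, and port depend on your external setup):

```yaml
# Data download: publicServerUrl must be reachable from users' browsers
  publicServerUrl: "https://download.example.org"
```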

Check the download server documentation to get a better understanding of how the download server works.

@vitalii I realized I put the blame in the wrong place :-). Yes, we need to fix the publicServerUrl. Currently we are still working on the domain name, certificate, and DNAT redirection. Once those logistics are completed, I will change it. In the meantime, I will test on the internal network.

After we set the correct publicServerUrl and opened the port to the download server, we are able to download files. We still need to reload the release 22 data, but other than that, it is working fine. @vitalii, thanks for all the help; this issue is now resolved.