Testing Ratings & Reviews

Teams at Just Eat

Here at JUST EAT, we are constantly improving the user experience of our products to create the best environment for our consumers to enjoy their takeaway experience. Recently, in the Web team, we have been moving functionality from our web app into a new responsive codebase, prioritising areas of the code based on business need. We know that one of the things our consumers want from us is high-quality reviews of our restaurants, so most recently we focused on the Ratings and Reviews functionality. As well as moving the code, we had requirements to redesign both the architecture and the user interface. To achieve this, we created a virtual team with representatives from Web, Automation, UX, the Consumer-order API team and the Rating API team – I was the test automation engineer working on this team.

Ratings & Reviews


The task at hand was to improve our customers' experience of the review functionality – with the hope of encouraging more users to leave reviews for restaurants. From the web point of view, this meant moving the review code into our new responsive codebase and applying a visual redesign. My role was to provide automated testing for the web changes. This change was important because reviewing is the primary way a customer gets the chance to provide honest feedback on the meal they have ordered. It is also useful when placing a new order: reviews help customers get a feel for what to expect, and the comments can sometimes prompt adjustments beforehand (e.g. asking for the meal to be spicier).

Here are some example scenarios from this requirement:

   * New rate my meal

  Scenario: Access rate my meal page from home page
    Given I am a user who has ordered before
    When I choose to rate my order from home page
    Then I am taken to the rate_my_meal page

  Scenario: Review submission with comment
    Given I am on the rate my meal page
    And I rate my order as:
      | quality  | 3 |
      | service  | 4 |
      | delivery | 4 |
    And I enter my feedback
    When I submit the review
    Then I see the non-editable version of the review page

  Scenario: Order paid by cash
    Given I am a user who has placed an order with cash
    When I choose to rate my order from home page
    Then I am taken to the rate_my_meal page
    And I see the order summary:
      | Order Summary           | $order_number  |
      | Americana               | £10.00         |
      | Subtotal                | £10.00         |
      | Delivery fee            | £2.00          |
      | Total                   | £12.00         |
      | Total paid by Cash      | £12.00         |
      | Requested delivery time | $selected_time |


Team Processes

Once the feature was broken down into a set of requirements, the Product Manager and Test Automation Engineer worked together to define the acceptance criteria as a set of feature files. I shared these feature files with the engineering team so that everyone was on the same page. In an agile environment, testing is not an isolated activity; it is performed in parallel with development. Having the tests ready is handy when it comes to rapid development: the developers unit test their code, but having functional tests ready by the time a developer commits means that fewer iterations are necessary.


The challenge in this task was that, in order to access the rate my meal page, you needed an existing order that could be rated and reviewed. I made the test do some ground work to get to the stage where the user visited the rate my meal page. This was a prerequisite for all the tests that checked the rating functionality. The following flow of events was needed for this prerequisite task.

  • Place an order (make sure the user has placed an order, so there is a meal to review)
  • Order is accepted and ready for rating (make sure the order is in a state where the user can review it; the order status is mimicked to act like a real order via an SQL update)
  • Generate the rating code for the given new order id (the QA environment I used for testing does not generate the email that is sent once an order is ready to be reviewed. For this reason, I had to generate a rating code programmatically from API function calls, using the Net::HTTP library to make GET and POST requests to the Order and Rating APIs)
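The second step – mimicking the order status – can be sketched as a small helper that builds the status-update statement. This is a hedged illustration only: the table and column names (`Orders`, `Status`, `OrderId`) are assumptions, since the real schema isn't shown here.

```ruby
# Sketch of the "order is accepted and ready for rating" step.
# In the QA environment the order status is mimicked with a direct
# SQL update instead of running the full order pipeline.
# Table/column names are hypothetical.
def mimic_accepted_order_sql(order_id)
  # Move the order into a ratable state, as a real order would be
  # once the restaurant has accepted and delivered it.
  "UPDATE Orders SET Status = 'Accepted' WHERE OrderId = #{order_id.to_i}"
end
```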

First I had to find the new order id for the given legacy order id via a GET request to the Order API, extracting the new order id from the JSON response.

def find_new_order_id(order_id)
  request_url = URI.escape("#{order_api_url}/order/#{order_id}")
  req = get(request_url)
  # The new order id comes back in the JSON response body
  # (the field name here is illustrative)
  JSON.parse(req.body)['Id']
end
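The `get` helper above (and the `post` counterpart used later) can be sketched with Ruby's Net::HTTP. This is an assumption about how such helpers might look, not the team's actual support code; splitting request building from sending keeps the construction easy to verify:

```ruby
require 'net/http'
require 'uri'

# Hypothetical GET/POST helpers in the spirit of those used by the steps.

# Build a GET request for the given URL.
def build_get(request_url)
  Net::HTTP::Get.new(URI.parse(request_url).request_uri)
end

# Build a POST request carrying a JSON body.
def build_post(request_url, json_body)
  uri = URI.parse(request_url)
  req = Net::HTTP::Post.new(uri.request_uri, 'Content-Type' => 'application/json')
  req.body = json_body
  req
end

# Send a built request and return the Net::HTTPResponse.
def send_request(request_url, req)
  uri = URI.parse(request_url)
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.request(req)
  end
end
```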

Next, I had to generate the rating code for the given new order id via the Rating API – a POST request that generates the rating code for the given order.

def generate_the_rating_code
  new_order_id = find_new_order_id @order_number
  user_id = find_the_user_id @username
  @restaurant_id = find_the_restaurant_id @order_number
  rating_api_body = {
    'customercity' => 'Testing',
    'CustomerName' => 'Test user',
    'orderid'      => new_order_id,
    'RestaurantId' => @restaurant_id,
    'userid'       => user_id
  }
  request_url = URI.escape("#{rating_api_url}/rating")
  req = post(request_url, JSON.generate(rating_api_body))
  @rating_code = JSON.parse(req.body)['RatingCode']
end

  • Make sure the rating code is not empty (the idea behind the rating code is to let the user access the rate my meal page without logging into the Just Eat web site. Once the rating code is generated, a security validation checks that the customer can only see the rate my meal page for an order they genuinely placed. So I included an assertion step to verify that the rating code is not empty in the URL when the user reaches the rate my meal page from any source (email, home page or order overview), and that the user sees a 403 page if it is empty)
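That assertion step can be sketched as below. The query parameter name (`ratingCode`) and the helper names are assumptions for illustration; the real step definitions would read the code from the browser's current URL.

```ruby
require 'uri'
require 'cgi'

# Extract the rating code from a rate_my_meal URL.
# Returns '' when the parameter is missing or blank.
def rating_code_from(url)
  params = CGI.parse(URI.parse(url).query.to_s)
  params['ratingCode'].first.to_s
end

# Mirror of the assertion step: fail (as the site would with a 403 page)
# when the rating code is absent from the URL.
def assert_rating_code_present(url)
  code = rating_code_from(url)
  raise "rating code missing from #{url} - expect a 403 page" if code.empty?
  code
end
```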



Helping to identify issues related to the requirements at an early stage, when the acceptance criteria are defined, is very useful. Early involvement from a tester's perspective not only helps to validate the business logic, it also highlights any impact the feature would have on the rest of the application.
Doing the testing in parallel with development, so that bugs are identified quickly, can make the whole development process faster. If we start working on step definitions before we have the application, we can make guesses at a sensible code structure based on experience, and if it differs from the end result, that conversation can be really valuable. We can also prepare the data scenarios required ahead of time, and understand whether we need to do something out of the ordinary to be able to test the functionality.
On several occasions I had the chance to pair with developers to discuss the rate my meal page design and implementation. These conversations helped us identify issues at an early stage from both the development and testing perspectives, and in some situations that knowledge changed the course of my automation tests and of the application code. In some cases we agreed on implementing UI elements in ways that make automation tests easier to write.

Thanks for reading!

~ Deepthi