source-academy / backend · commit 9c2823cf899c9314fd4ccec9dd6c3b589d83e839
04 Dec 2025 05:47PM UTC · coverage: 88.716% (-0.9%) from 89.621%
push · github · web-flow
AI-powered marking (#1248)

* feat: v1 of AI-generated comments

* feat: added logging of inputs and outputs

* Update generate_ai_comments.ex

* feat: function to save outputs to database

* Format answers json before sending to LLM

* Add LLM Prompt to question params when submitting assessment xml file

* Add LLM Prompt to api response when grading view is open

* feat: added llm_prompt from qn to raw_prompt

* feat: enabling/disabling of LLM feature by course level

* feat: added llm_grading boolean field to course creation API

* feat: added api key storage in courses & edit api key/enable llm grading

* feat: encryption for llm_api_key

* feat: added final comment editing route

* feat: added logging of chosen comments

* fix: bugs when certain fields were missing

* feat: updated tests

* formatting

* fix: error handling when calling openai API

* fix: credo issues

* formatting

* Address some comments

* Fix formatting

* rm IO.inspect

* a

* Use case instead of if

* Streamlines generate_ai_comments to only send the selected question and its relevant info + use the correct llm_prompt

* Remove unnecessary field

* default: false for llm_grading

* Add proper linking between ai_comments table and submissions. Return it to submission retrieval as well

* Resolve some migration comments

* Add llm_model and llm_api_url to the DB + schema

* Moves api key, api url, llm model and course prompt to course level

* Add encryption_key to env

* Do not hardcode formatting instructions

* Add Assessment level prompts to the XML

* Return some additional info for composing of prompts

* Remove unused 'save comments'

* Fix existing assessment tests

* Fix generate_ai_comments test cases

* Fix bug preventing avengers from generating ai comments

* Fix up tests + error msgs

* Formatting

* some mix credo suggestions

* format

* Fix credo issue

* bug fix + credo fixes

* Fix tests

* format

* Modify test.exs

* Update lib/cadet_web/controllers/gener... (continued)

118 of 174 new or added lines in 9 files covered. (67.82%)

1 existing line in 1 file now uncovered.

3758 of 4236 relevant lines covered (88.72%)

7103.93 hits per line

Source file: /lib/cadet_web/helpers/ai_comments_helpers.ex (73.68% covered)
```elixir
defmodule CadetWeb.AICommentsHelpers do
  @moduledoc """
  Helper functions for managing LLM-related logic.
  """
  require Logger

  def decrypt_llm_api_key(nil), do: nil

  def decrypt_llm_api_key(encrypted_key) do
    case Application.get_env(:openai, :encryption_key) do
      secret when is_binary(secret) and byte_size(secret) >= 16 ->
        key = binary_part(secret, 0, min(32, byte_size(secret)))

        case String.split(encrypted_key, ":", parts: 3, trim: false) do
          [iv_b64, tag_b64, cipher_b64] ->
            with {:ok, iv} <- Base.decode64(iv_b64),
                 {:ok, tag} <- Base.decode64(tag_b64),
                 {:ok, ciphertext} <- Base.decode64(cipher_b64) do
              case :crypto.crypto_one_time_aead(:aes_gcm, key, iv, ciphertext, "", tag, false) do
                plain_text when is_binary(plain_text) -> {:ok, plain_text}
                _ -> {:decrypt_error, :decryption_failed}
              end
            else
              _ ->
                Logger.error("Failed to decode one of the components of the encrypted key")
                {:decrypt_error, :invalid_format}
            end

          _ ->
            Logger.error("Encrypted key format is invalid")
            {:decrypt_error, :invalid_format}
        end

      _ ->
        Logger.error("Encryption key not configured")
        {:decrypt_error, :invalid_encryption_key}
    end
  end

  def encrypt_llm_api_key(llm_api_key) do
    secret = Application.get_env(:openai, :encryption_key)

    if is_binary(secret) and byte_size(secret) >= 16 do
      # Use the first 16 bytes for AES-128, 24 for AES-192, or 32 for AES-256
      key = binary_part(secret, 0, min(32, byte_size(secret)))
      # Use AES in GCM mode for encryption
      iv = :crypto.strong_rand_bytes(16)

      {ciphertext, tag} =
        :crypto.crypto_one_time_aead(:aes_gcm, key, iv, llm_api_key, "", true)

      # Store the IV, tag and ciphertext together, base64-encoded and colon-separated
      Base.encode64(iv) <> ":" <> Base.encode64(tag) <> ":" <> Base.encode64(ciphertext)
    else
      {:error, :invalid_encryption_key}
    end
  end
end
```
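The helpers store the AES-GCM pieces as colon-separated base64 (`iv:tag:ciphertext`). A minimal standalone sketch of that envelope, using only Erlang's `:crypto` and Elixir's `Base`, with a throwaway random key (no Cadet config assumed; the `"sk-secret"` value is a placeholder, not a real key):

```elixir
# Encrypt: random 32-byte key (AES-256) and 16-byte IV, GCM with empty AAD.
key = :crypto.strong_rand_bytes(32)
iv = :crypto.strong_rand_bytes(16)
{ciphertext, tag} = :crypto.crypto_one_time_aead(:aes_gcm, key, iv, "sk-secret", "", true)

# Pack into the same envelope format the helpers use.
encrypted = Base.encode64(iv) <> ":" <> Base.encode64(tag) <> ":" <> Base.encode64(ciphertext)

# Decrypt: split the three components, decode, and verify the tag.
[iv_b64, tag_b64, ct_b64] = String.split(encrypted, ":", parts: 3)
{:ok, iv2} = Base.decode64(iv_b64)
{:ok, tag2} = Base.decode64(tag_b64)
{:ok, ct2} = Base.decode64(ct_b64)
"sk-secret" = :crypto.crypto_one_time_aead(:aes_gcm, key, iv2, ct2, "", tag2, false)
```

Because the tag is authenticated, tampering with any of the three components makes the final call return `:error` instead of the plaintext, which is what the helper's `:decryption_failed` branch catches.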

© 2025 Coveralls, Inc